Hi,
What is the best way to secure sensitive data on a file server from AD admins? "Trust does not exclude control."
PS: an AD admin can take ownership of any shared folder and then change the permissions.
Thanks.
MCP - MCTS - MCSA - MCITP
Dear all,
I am migrating a file server currently running on Windows Server 2008 R2 to Server 2016. How would I go about doing this using robocopy so that I keep all the NTFS file and share permissions...?
I am using the commands below:
robocopy \\FileSrv1\home \\FileSrv2\home /S /E /COPY:DATSOU /R:1 /W:30 >C:\share_Copy.log
or
robocopy \\FileSrv1\share \\FileSrv2\share /MIR /COPYALL /R:3 /W:10 /tee /log:D:\ROBOCOPY\share_Copy.log
The second command shows the progress on screen (/tee) while also writing it to the log file.
Thanks
MS
Hi!
We have a 2012 R2 file server. On said server we have a shared folder with some subfolders; one of these subfolders has inheritance disabled and its access restricted to an AD security group. A basic layout, so to speak. The security group has full access, as well as the built-in "Administrators" group on the server.
Under Advanced Sharing on the share itself, "Everyone" has Read and Change permissions, and the previously mentioned "Administrators" group has everything.
As far as I'm concerned, the most restrictive access, share or NTFS, should be the one determining a user's effective access, which in this case means the NTFS security AD group; however, something else is tampering with the permissions.
The issue:
Some users, seemingly random users, can still access and read/write/change everything in this restricted folder, and I cannot for the life of me find a common denominator between them.
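For reference (not specific to this server): over SMB, the effective access is whatever survives both the share ACL and the NTFS ACL, i.e. the more restrictive of the two. A toy sketch of that evaluation model, with made-up right names rather than a real ACL dump:

```python
# Toy model: over SMB, the effective access is the intersection of the
# rights granted at the share level and at the NTFS (folder) level.
# Right names and the sets below are illustrative, not a real ACL dump.

def effective_rights(share_rights: set, ntfs_rights: set) -> set:
    """A user gets only the rights present in BOTH permission sets."""
    return share_rights & ntfs_rights

share = {"read", "change"}                  # "Everyone" on the share: Read + Change
ntfs = {"read", "write", "change", "full"}  # the restricted security group on the folder

print(sorted(effective_rights(share, ntfs)))   # -> ['change', 'read']
# A user NOT in the NTFS group holds no NTFS rights, so nothing survives:
print(sorted(effective_rights(share, set())))  # -> []
```

If that model holds, a user outside the NTFS group should end up with nothing, so nested group membership and any explicit ACEs on the folder are worth double-checking for the users who can still get in.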
Hi,
I want to know which services in Windows Server are incompatible with the DFS service. In other words, which services should not be installed alongside the DFS server?
Thanks
We are planning to migrate around 4,000+ users' home directories from one server to another using Robocopy. We are sure that Robocopy will migrate the data with the NTFS ACL permissions, but we want to know whether the share-level permissions are also carried over by the Robocopy command, or whether we need to re-share the folders one by one manually.
Thanks and Regards,
Hariharan
I've been beating my head against a wall for a couple of days on this. I'm starting to think Storage Replica is just broken in Server 2019.
This is not a complicated setup. I have:
I am attempting to enable replication between Site A and Site B using a storage replica in asynchronous mode. I am following instructions found here (for a general purpose file server): https://docs.microsoft.com/en-us/windows-server/storage/storage-replica/stretch-cluster-replication-using-shared-storage
When I try to enable replication via the Failover Cluster Manager GUI, or from PowerShell, or if I simply try to create a new replication group (using `new-srgroup`), I end up with the error in the title. For instance, in the GUI I am able to select my destination disk, my source log volume, and my destination log volume, as well as other options. When I click Finish, I get an error that "The requested object could not be found."
Failed to create replication. ERROR CODE: 0x80131500; NATIVE ERROR CODE: 6. Unable to create replication group Replication 1, detailed reason: The requested object could not be found.
I have gone to PowerShell and tried similar things. I have tried using the actual volume drive letter, and the volume ID. I've confirmed, via Event Viewer, that the GUI is trying to use the proper volume IDs to enable replication. Yet, I cannot get past the "object could not be found" error.
If I try to run `test-srtopology` I receive an error:
The specified volume F: cannot be found on computer node1. If this is a cluster node, the volume must be part of a role or CSV; volumes in Available Storage are not accessible...
So, I add the replica volume to the file server cluster role so that both the data volume and the replica log volume are part of the FSC01 role. Now, `test-srtopology` completes successfully and there are no warnings or errors. It tells me there are no issues if I want to proceed to replicate between Site A and Site B.
However, back in the Enable Replication GUI, it refuses to show me the replica log volume as an available option for the source log volume, because the GUI will only show volumes that are in "Available Storage." So there is a bit of cognitive dissonance between the Enable Replication GUI and the `test-srtopology` PowerShell command: the test requires both volumes to be assigned to a role, while the GUI that actually enables replication requires that the replica log volume is NOT assigned to a role. It doesn't make any sense to me. Kind of stupid. However, I also bypassed the GUI and tried using PowerShell to create a replication group while both volumes are assigned to the FSC01 role, and I still get the error in the title.
I'm trying to use robocopy (as I have always done) to transfer data to another hard drive, but it doesn't seem to be working. I am not getting an error, but it is not transferring data or showing that it is transferring data. As far as I know, my path is correct! Has Microsoft done something to Robocopy so it no longer works?
Thank you,
Todez
It's a pretty common practice for SQL-backed file stores to be formatted with a 64 KB allocation unit size. SQL Server sort of has its own file system that sits on top of NTFS anyway, so it's one very large single file.
But is using that 64 KB as a standard for a regular file store a good idea?
I'm trying to understand specifically what is going on, down at the disk read/write-head level, when it comes to picking an allocation unit. Let's say I have a volume using a 64 KB allocation unit size, and an application that is copying some files but is also writing to log files frequently. Let's also say that, for diagnostic reasons, the logger opens the file (for append), writes the log message, flushes, then closes the file. It might do this many, many times during the session.
Given the above, is this a true statement: if the logger writes 10 bytes to the file, the entire 64 KB allocation unit is rewritten (so 64 KB of write IO occurred)? If the logger writes a 1000-byte message, again, 64 KB is actually written?
Or is it smarter than that, and will it only write the specific number of bytes involved in the file IO operation?
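As I understand it, the allocation unit controls how space is reserved, not how many bytes each write physically touches: disk IO is issued at sector granularity, so a 10-byte append does not rewrite the whole 64 KB cluster. A minimal sketch of the allocation-side arithmetic (cluster counts and tail slack, assuming a 64 KB cluster; the function names are mine):

```python
# Cluster-allocation arithmetic for an NTFS-style volume.
# cluster_size is the allocation unit; writes themselves go out at sector
# granularity, so appending 10 bytes does not rewrite 64 KB -- the cluster
# size only controls how space is *reserved* (and thus the tail slack).
import math

def clusters_used(file_size: int, cluster_size: int = 64 * 1024) -> int:
    """Number of allocation units a file of file_size bytes occupies."""
    return max(1, math.ceil(file_size / cluster_size))  # even a tiny file takes one cluster

def slack_bytes(file_size: int, cluster_size: int = 64 * 1024) -> int:
    """Allocated-but-unused bytes (slack) at the tail of the file."""
    return clusters_used(file_size, cluster_size) * cluster_size - file_size

# A 10-byte log record still reserves one full 64 KB cluster...
print(clusters_used(10))  # -> 1
print(slack_bytes(10))    # -> 65526
# ...and a 1000-byte message fits in that same single cluster.
print(slack_bytes(1000))  # -> 64536
```

So the main cost of a large allocation unit for a store full of small files is wasted slack space (and coarser allocation), rather than 64 KB of write IO per small append.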
Hi,
I have set up a 2-node failover cluster using Amazon EC2 instances and Storage Spaces (2 x Windows Server 2016).
After resizing a physical disk, I am not sure whether it is possible to see this change in the storage pool.
If I run Get-PhysicalDisk, it still shows me the same size.
The second domain controller is shown on the private network, and there is an error in the operations master roles.
In the Event Viewer, these events appear:
Here is a link to the dcdiag output, the DC ipconfig, and the replication status:
https://1drv.ms/f/s!Ag-u-xMhnkazgm8g2xdDHBiNyqKg
Hi everyone!
Is there a way to create a file screen category with more than 2000 entries?
Thanks
Doria
Sometimes we get some annoying things...
Files have been created since 2007, and several years and several robocopy migrations later, I have a Win2012R2 8 TB file server with 7 MILLION files on it.
Now I'm trying to generate a TXT file with a simple thing: the names of all the files on it. But a lot of paths are more than 255 characters long (thanks to the end users, Windows Explorer, and other tools that let users create such long file and folder names, but I'm digressing...).
What I want is pretty simple: dir /s /a > LISTING.TXT
So, what are my alternatives? PowerShell? VBS?
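Not one of the two options asked about, but as another sketch of an alternative (assuming Python is available on the server; the paths in the comments are examples, not real shares): `os.walk` can enumerate everything if the root is given with the `\\?\` prefix, which lifts the legacy 260-character path limit.

```python
# Sketch: dump every file path under a root into a listing file, one per line.
# On Windows, prefixing the root with \\?\ (e.g. r"\\?\D:\Shares") lifts the
# legacy 260-character path limit for the calls os.walk makes underneath.
import os

def list_files(root: str, out_path: str) -> int:
    """Walk root, write one full file path per line, return how many were written."""
    count = 0
    with open(out_path, "w", encoding="utf-8", errors="replace") as out:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                out.write(os.path.join(dirpath, name) + "\n")
                count += 1
    return count

# Hypothetical usage on the server (paths are examples only):
# list_files(r"\\?\D:\Shares", r"D:\LISTING.TXT")
# list_files(r"\\?\UNC\FileSrv1\share", r"D:\LISTING.TXT")
```

Writing the output as UTF-8 with `errors="replace"` also sidesteps the odd characters that two decades of end users will have put into file names.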
Hello
I have been using Storage spaces for over 2 years without much of a problem until now.
I have 6 drives in the pool totaling about 10 TB.
10 days ago, I was doing some cleanup and deleted 350 GB of data, but the pool's available space did not change.
I noticed that the Recycle Bin was full, so I figured I needed to empty it out. Well, that's where it went wrong.
Since emptying the Recycle Bin, the Storage Spaces drive has disappeared from Windows Explorer. I can see the drives are "working," and if I go into Storage Spaces it shows all 6 drives in good condition with about 70% capacity. All options other than Rename are greyed out.
It has been 10 days now, and I restarted my computer once to see whether it would do any good; it did not.
Does anyone know what's wrong, or should I just wait it out for the pool to sort itself out?
Please advise.
Many thanks in advance
Henry
I can't find where to set automatic cleanup of temp files in Windows Server 2016. Storage Sense doesn't appear under Storage, where I could set auto cleanup. Is there such a feature in Windows Server 2016?
Thank you,
NLuu
Hi Experts,
We're planning to create a RAID 5 array on our Dell PowerEdge R530 using the PERC H330 controller that comes with the server.
We will use 3 x 4 TB SATA drives for the array, but I have come across a few forums saying the H330 is a bad idea for a parity RAID array, since the controller has no cache and the performance of the array is quite slow.
What would be the best option for us in this situation: Windows Server software RAID or the H330 controller?
Thanks in advance.
Hi all,
We have a file server that currently backs up once a day using the Microsoft Azure Backup agent. I can successfully mount a backup, but I get an error stating "You currently don't have permission to access this folder" when trying to access a specific folder. The folder in question has 'share\Administrators' as the owner, as do all the folders, but only 'Domain Admins' is in the list of users able to access the content. I am logged into the server with my own account, which has Domain Admin permissions, and I am unable to change the owner of the folder because it's write-protected (I suspect due to being accessed via the backup agent directly). Other than copying the folder out and amending permissions, is there any way we can set this up to be readily accessible?
Hi
I currently have three servers in DFS Replication using a hub-and-spoke topology, and I want to change this to a mesh topology.
If I pick a replication group, select New Topology, delete the current hub and spoke, and select Mesh, will this delete the current data? Or will it compare the missing files and perform replication?
There doesn't appear to be a primary server selection, so I'm hoping there won't be an authoritative server.
I have one folder with a 200 GB discrepancy. Will this just replicate the extra data to the other servers?
Regards
Jimi