Remove delete permission but still allow modification of files
I have been trying to set up our 2012 R2 file server to prevent users from deleting files while still allowing them to modify them. No matter what I try, when users don't have the Delete permission they cannot modify either. Is this possible?
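For anyone reproducing this, the kind of ACE I mean is read/execute plus write, without delete; a sketch only, with the path and group as placeholders:
icacls "D:\Share" /grant "DOMAIN\Users:(OI)(CI)(RX,W)"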
File Server access permissions
Hello guys,
In our company we have a mapped X: drive for file sharing.
There are folders on it already created by the sysadmin for our employees.
These folders are hidden from all other users and are visible only to those who have access rights to them.
For example: if only user1 has access permissions on folder1, only that user can see this folder.
At the same time, all domain users can currently write files to the root of that X: drive as well.
So we need that no one in our domain has write permissions on the X: drive root, and that only specific users can write files in the specific folders created by the system administrator.
Can you give us your suggestions, or do you have some experience with this issue?
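For context, the kind of ACL change we are considering looks roughly like this (share root, domain, and account names are placeholders):
icacls "D:\XDrive" /grant:r "DOMAIN\Domain Users:(RX)"
icacls "D:\XDrive\folder1" /grant "DOMAIN\user1:(OI)(CI)M"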
Thanks in advance,
DFS Replication Event ID 4312 on Windows Server 2012 R2
Hi,
We have two servers using DFS Replication, and on a single drive we have two shared folders. DFS Replication is working on one folder but not on the second. We keep getting the error below. We have removed the shares and deleted the DFSR folder under the system volume, with no success. Our issue is that we just do not know why it is failing; what should we be looking at?
=====================
The DFS Replication service has been repeatedly prevented from getting folder information due to consistent sharing violations encountered on a folder. The service is unable to replicate the folder and files in that folder until the sharing violation is resolved. Event ID: 4312
====================
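One thing we can check is what is holding files open under the failing folder. A sketch (the path is a placeholder; Get-SmbOpenFile only shows SMB opens, and local process handles would need something like Sysinternals handle.exe):
Get-SmbOpenFile | Where-Object { $_.Path -like 'D:\Share2\*' } | Format-Table ClientUserName, Path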
DFS Path Missing
Hi,
I keep seeing PCs that can't access the DFS path.
I can see
\\domain.co.uk\UK\ but from there everything is missing.
But if I create this registry value: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\CSC\Parameters\FormatDatabase
then everything starts working. But sometimes, whatever I do, it doesn't work, although I can still access the path if I type in the DC name, e.g. \\DC1\UK\path1\Path2.
Can someone please explain why this keeps happening and how I can fix it?
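For reference, this is how I create that value, which I understand to be the documented Offline Files (CSC) database reset; a reboot is required afterwards:
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\CSC\Parameters' -Name FormatDatabase -PropertyType DWord -Value 1 -Force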
Open Files Disappear
Hello,
I don't understand what is happening on my file servers: when I open a file as a user (a random user or an admin), the open file appears in the open files list, but a few seconds later the file disappears from the list, even though the file is still open by the user.
Can someone explain that?
I am looking into this because I want a way to know whether a file (an HTML file) is already open by another user, and if it is already open, to prevent a new opening.
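For what it's worth, the equivalent check from PowerShell would presumably be something like this (the path is just an example):
Get-SmbOpenFile | Where-Object { $_.Path -eq 'D:\Shares\page.html' }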
Thanks for your help
How to expand Storage Replica Volume
Hi folks,
I have a file server with 3 partitions that is in a Storage Replica partnership with a second file server.
How can I expand the disk size of one partition?
I found on TechNet that I should run Set-SRGroup -Name XX -AllowVolumeResize $true on each server, and then?
On the first server I can expand the disk via Server Manager / Disk Management, but not on the second?
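From what I have pieced together so far, the whole sequence should be something like this (the replication group name and drive letter are placeholders):
# On both servers, allow the resize:
Set-SRGroup -Name "RG01" -AllowVolumeResize $true
# On the source server, grow the partition:
$max = (Get-PartitionSupportedSize -DriveLetter D).SizeMax
Resize-Partition -DriveLetter D -Size $max
# Afterwards, re-enable the size safeguard on both servers:
Set-SRGroup -Name "RG01" -AllowVolumeResize $false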
Regards
VSS file copy of SMB share over Storage Spaces Direct
Folks
I want to implement Storage Spaces Direct across 2 servers. On top of this storage I want to implement a file service (an SMB file share). I then want to snapshot this file system on a schedule, using vssadmin.
Does this work?
Is it possible to pick up earlier versions of files that have been changed (and stored as snapshots) and redirect them to another place, to be copied to an offline store?
Does anyone have any experience with this?
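For what it's worth, the scheduled command I have in mind is simply (drive letter is a placeholder):
vssadmin create shadow /for=D:
and then listing the available snapshots with:
vssadmin list shadows /for=D: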
Storage Replica - change replica network - Set-SRNetworkConstraint
Hello,
I tried to change my Storage Replica network to a dedicated replication network.
On Server 1 the replication network has InterfaceIndex 5.
On Server 2 the replication network has InterfaceIndex 3.
From Server1 I run:
Set-SRNetworkConstraint -SourceComputerName "Server1" -SourceRGName "Server01rg01" -SourceNWInterface 5 -DestinationComputerName "Server2" -DestinationRGName "Server02rg01" -DestinationNWInterface 3
but I get an error:
Set-SRNetworkConstraint : The network constraint for the replication group Server1 cannot be updated.
At line:1 char:1
+ Set-SRNetworkConstraint -SourceComputerName Server1 -SourceRGName Server1 ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ResourceUnavailable: (MSFT_WvrAdminTasks:root/Microsoft/...T_WvrAdminTasks) [Set-SRNetworkConstraint], CimException
+ FullyQualifiedErrorId : Windows System Error 64,Set-SRNetworkConstraint
Any ideas?
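What I would check next, as a sketch (the Get-SRNetworkConstraint parameters are my assumption of the matching query cmdlet):
# Confirm the interface indexes on each node:
Get-NetAdapter | Format-Table Name, InterfaceIndex, Status
# See whether a constraint already exists for the partnership:
Get-SRNetworkConstraint -SourceComputerName "Server1" -SourceRGName "Server01rg01" -DestinationComputerName "Server2" -DestinationRGName "Server02rg01"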
Regards,
alpina
Storage Spaces Direct - unable to configure Journal (cache) drives
I am trying to configure Storage Spaces Direct (S2D) on Dell R730xd servers in a 3-node cluster. Each server has 2 SSD and 4 SATA drives dedicated to the S2D pool. I have a simple pass-through HBA controller, a Dell HBA330, recommended by Dell for S2D solutions.
The problem is that I cannot set the SSD drives as cache (Journal) drives.
Cluster validation for S2D is successful.
Manually setting the SSD drives as Journal has no effect.
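To be specific, the manual attempt was along these lines (the disk name is a placeholder):
# Check how the drives were detected:
Get-PhysicalDisk | Format-Table FriendlyName, MediaType, Usage, CanPool
# Try to force an SSD into the cache tier:
Set-PhysicalDisk -FriendlyName "PhysicalDisk2" -Usage Journal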
Do you have any suggestions?
-- Konrad Puchala
Forum FAQ: Temporary files are not replicated by DFSR
Summary
Someone may notice that DFS Replication (DFSR) does not replicate certain files even though most other files replicate successfully; the reason is that the temporary attribute is set on these un-replicated files.
By design, DFSR does not replicate files that have the temporary attribute set, and it cannot be configured to replicate them. The reason is that such files are considered short-lived files that you would never actually want to replicate. Using the temporary attribute on a file keeps that file's data in memory and saves disk I/O, so applications can use it on short-lived files to improve performance.
Symptom
Suppose you have set up DFS shares and a DFS Replication group between 2 or more DFS replication servers. Most of the content under the DFS target folder replicates to the other DFS servers; however, a few files do not.
When you use Fsutil to check an un-replicated file, you will see the temporary attribute set on the file.
For example: Checking the Temporary Attribute on a File
fsutil usn readdata c:\data\test.txt
Major Version : 0x2
Minor Version : 0x0
FileRef# : 0x0021000000002350
Parent FileRef# : 0x0003000000005f5e
Usn : 0x000000004d431000
Time Stamp : 0x0000000000000000 12:00:00 AM 1/1/1601
Reason : 0x0
Source Info : 0x0
Security Id : 0x5fb
File Attributes : 0x120
File Name Length : 0x10
File Name Offset : 0x3c
FileName : test.txt
File Attributes is a bitmask that indicates which attributes are set. In the example above, 0x120 indicates the temporary attribute is set, because 0x100 (TEMPORARY) + 0x20 (ARCHIVE) = 0x120.
Here are the possible values:
READONLY              0x1
HIDDEN                0x2
SYSTEM                0x4
DIRECTORY             0x10
ARCHIVE               0x20
DEVICE                0x40
NORMAL                0x80
TEMPORARY             0x100
SPARSE_FILE           0x200
REPARSE_POINT         0x400
COMPRESSED            0x800
OFFLINE               0x1000
NOT_CONTENT_INDEXED   0x2000
ENCRYPTED             0x4000
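For example, you can confirm the bit test from the table above directly in PowerShell; this returns True for the attribute value shown earlier:
(0x120 -band 0x100) -eq 0x100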
Resolution
Removing the Temporary Attribute from Multiple Files with PowerShell
To remove the temporary attribute, we can use Windows PowerShell (on Windows Server 2008 it can be installed as a feature if not already present). Open a PowerShell prompt and run this command to remove the temporary attribute from all files in the specified directory, including subdirectories (in this example, D:\Data):
Get-ChildItem D:\Data -Recurse | ForEach-Object { if (($_.Attributes -band 0x100) -eq 0x100) { $_.Attributes = ($_.Attributes -band 0xFEFF) } }
Note: If you don't want it to work against subdirectories, just remove the -Recurse parameter.
More Information
DFSR Does Not Replicate Temporary Files
http://blogs.technet.com/askds/archive/2008/11/11/dfsr-does-not-replicate-temporary-files.aspx
Applies to
Windows Server 2008, Windows Server 2008 R2
Storage Replica + VSS = problem
Hello
I have two servers running Windows Server, version 1803. Currently I have a 60 TB volume (replicated via Storage Replica) between these servers, with VSS configured on the same volume. This setup worked for a few days; now every time the VSS service tries to take a snapshot of the volume, the server hangs and I am forced to reboot or wait a few hours until I regain control of my server. In the Event Viewer I have the following message:
VssAdmin: Unable to create a shadow copy: The shadow copy provider timed out while flushing data to the volume being shadow copied. This is probably due to excessive activity on the volume. Try again later when the volume is not being used so heavily.
Except that the snapshot runs overnight, with no user access and no other parallel jobs. My VSS configuration:
For volume: (E:) \\?\Volume{f26d0547-1ad9-4080-866e-24f02752ac93}\
Shadow Copy Storage volume: (E:) \\?\Volume{f26d0547-1ad9-4080-866e-24f02752ac93}\
Used Shadow Copy Storage space: 27.4 GB (0%)
Allocated Shadow Copy Storage space: 74.6 GB (0%)
Maximum Shadow Copy Storage space: 9.00 TB (15%)
I confess I do not understand where the problem comes from. Any idea? Do I need to configure VSS on another volume?
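If moving the shadow copy storage to another volume is the fix, I assume it would be something like this (F: is a placeholder target, and any existing storage association for E: may need to be removed first):
vssadmin add shadowstorage /for=E: /on=F: /maxsize=9TB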
Sorry for my bad English.
Thank you.
msDFS-NamespaceAnchor missing.
Hi,
one of our namespaces is getting 'element not found'; when investigating, I found that the msDFS-NamespaceAnchor object is no longer in AD.
I am unsure how to get this back, and why it vanished in the first place. We have 2 more namespaces running fine.
DFS is on a standalone server and does not replicate.
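For anyone wanting to reproduce the check, something like this should show whether the anchor object exists (the search base is shown with placeholder domain components):
Get-ADObject -Filter { objectClass -eq 'msDFS-NamespaceAnchor' } -SearchBase 'CN=Dfs-Configuration,CN=System,DC=example,DC=com'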
Any help would be greatly appreciated
SQL Windows Authentication Issues
Does NTFS range tracking actually work?
Hi,
On a Windows 10 client, range tracking is enabled on NTFS volume E:. From the query (DeviceIoControl(FSCTL_QUERY_USN_JOURNAL)), RangeTrackChunkSize is 16 KB and RangeTrackFileSizeThreshold is 1 MB. For a file larger than 2 MB, if the first 2 bytes and the last 2 bytes are modified before the file is closed, I would expect two extents. But there is only one extent: any change inside the large file is reported as a single change starting from offset 0, with the file size as the extent length. This is not correct. See the logs below (from my own program) for details.
// Log starts below...
Range tracking is enabled on this journal since USN 760
Journal Info...
MinSupportedMajorVersion=2
MaxSupportedMajorVersion=4
RangeTrackChunkSize=16384
RangeTrackFileSizeThreshold=1048576
FirstUsn: 0
NextUsn: 8992
======USN Record V3======
USN: 8672
File name: large.txt
Reason: 4
======USN Record V3======
USN: 8752
File name: large.txt
Reason: 6
======USN Record V4======
USN: 8832
Reason: 80000006
RemainingExtents: 0
NumberOfExtents: 1
ExtentSize: 16
Extent 1: Offset: 0, Length: 2129920
======USN Record V3======
USN: 8912
File name: large.txt
Reason: 80000006
Press any key to continue..
Thanks,
Jing
DeviceIoControl(FSCTL_USN_TRACK_MODIFIED_RANGES) does not work to change @ChunkSize and @FileSizeThreshold
Hi,
I'm trying to enable the range tracking feature on an NTFS volume on a Windows 10 desktop. Range tracking gets enabled, but the chunk size and the file-size threshold can never be changed; they are always 16384 and 1048576, respectively.
For example:
C:\WINDOWS\system32>fsutil usn queryjournal e:
Usn Journal ID : 0x01d4606e19f40518
First Usn : 0x0000000000000000
Next Usn : 0x0000000000000668
Lowest Valid Usn : 0x0000000000000000
Max Usn : 0x7fffffffffff0000
Maximum Size : 0x0000000000400000
Allocation Delta : 0x0000000000100000
Minimum record version supported : 2
Maximum record version supported : 4
Write range tracking: Enabled
Write range tracking chunk size: 16384
Write range tracking file size threshold: 1048576
C:\WINDOWS\system32>fsutil usn enablerangetracking c=1024 s=2048 e:
C:\WINDOWS\system32>fsutil usn queryjournal e:
Usn Journal ID : 0x01d4606e19f40518
First Usn : 0x0000000000000000
Next Usn : 0x0000000000000668
Lowest Valid Usn : 0x0000000000000000
Max Usn : 0x7fffffffffff0000
Maximum Size : 0x0000000000400000
Allocation Delta : 0x0000000000100000
Minimum record version supported : 2
Maximum record version supported : 4
Write range tracking: Enabled
Write range tracking chunk size: 16384
Write range tracking file size threshold: 1048576
I did try other numbers, but it turns out they are fixed at 16384 and 1048576. No error is returned.
I also tried DeviceIoControl(FSCTL_USN_TRACK_MODIFIED_RANGES), which succeeds without error, but the @ChunkSize and @FileSizeThreshold just don't take effect.
Is this a known issue?
Thanks,
Jing
Make subfolders of 2 shared folders visible with DFS Namespace
I have two folders shared from one of the servers. They are both mapped in user sessions with 2 different drive letters.
Now I want to create a new shared folder which includes both of them; I found out I can use a DFS namespace.
I need one shared folder that contains all the subfolders of both of the other folders, but when I create the DFS namespace and add both folders as targets of one DFS folder, it only shows the subfolders of one of them.
Am I missing something here? Is this really possible with a DFS namespace? If so, can you please provide the right steps?
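From what I have read so far, I suspect the merged view needs one DFS folder per share rather than two targets on the same folder; something like this (namespace and share names are placeholders):
New-DfsnFolder -Path '\\example.local\Files\Share1' -TargetPath '\\Server1\Share1'
New-DfsnFolder -Path '\\example.local\Files\Share2' -TargetPath '\\Server1\Share2'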
There is not enough space available on the disk(s) to complete this operation
When I try to expand the disk on my Windows Server 2008 machine I get this error:
There is not enough space available on the disk(s) to complete this operation.
Any suggestions are greatly appreciated.
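In case it matters, the first thing I plan to try is forcing a disk rescan from an elevated command prompt (a commonly suggested first step for this error): run diskpart, then at the DISKPART prompt:
diskpart
rescan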
Thanks
AL
Intermittent file share issues Server 2012
Hi Technet people.
We have an intermittent file share issue happening around once a week (each weekend) at seemingly random times; it started around 3 weeks ago.
TIMELINE –
Week 1 – Sunday 07:00 - Issue resolved itself with no action taken, file shares were unavailable for around 15 mins
Week 2 – Saturday 7PM – Failed the cluster services over to passive node, issue cured. Monday 14:43 - Issue resolved itself with no action taken, file shares were unavailable for around 15 mins
Week 3 – Sunday 03:20ish – Failed the cluster services over to passive node, issue cured
The issue –
The file server is set up for only this purpose: a physical Server 2012 machine, clustered, with shared storage.
File shares are intermittently unavailable (even the admin shares); sometimes they come back on their own after around 15 - 20 minutes.
Troubleshooting during the issue –
- Unable to reach any UNC share (C$ etc.)
- The server is pingable
- RDP works
- All shares show online in Failover Cluster Manager
- The issue has happened on both nodes; one node is up to date with Windows updates, one is not (2 months behind). During the issue the passive node is not affected.
- No specific errors in the Windows event log
- We have another file cluster at a different site, with exactly the same OS / roles / hardware / firmware / storage, not experiencing any issues.
As I said above, sometimes the issue resolves itself after 15 - 20 minutes, or a failover to the passive node resolves it. We became aware of this problem because the server hosts a folder redirection share; when the file shares are unavailable, this seems to crash computers that use folder redirection, and they are unusable during the outage.
The monitoring tool has not reported any issues with the server; it monitors disks, CPU, memory, and cluster services every 5 minutes.
Google is of no help, there are no errors in the logs, and no other servers show any issues. The current suspect is the antivirus, however the same version is in use on all other servers and presenting no issues.
NetBackup has been running during the issues but again this runs on all other servers and nothing has changed.
Any help or suggestions would be much appreciated
Windows Server 2016 Datacenter Storage Replica Question
Hello,
I have 2 physical servers (with Hyper-V) running Windows Server 2012 Standard in a failover cluster, "Cluster A". Those servers are connected to a SAN storage array, "Storage 1", through a SAN switch.
My plan is to buy two new servers, "Server 3" and "Server 4", and one new storage array, "Storage 2".
I will get those new servers racked and patched to the same SAN switch.
I will install Windows Server 2016 Datacenter on "Server 3" and Windows Server 2016 Standard on "Server 4".
Install Hyper-V and Failover Cluster Manager on the new servers.
Add the two new servers to the current cluster "Cluster A".
Move the VMs and services to the new servers "Server 3" and "Server 4" (live migration).
I will use Cluster Manager to PAUSE and then EVICT the old servers "Server 1" and "Server 2" from the cluster.
Upgrade the cluster functional level from 2012 to 2016.
Connect the new storage "Storage 2" to the cluster "Cluster A".
Move the Hyper-V machines from "Storage 1" to "Storage 2".
Install Windows Server 2016 Standard on the old servers "Server 1" and "Server 2".
Create a new cluster "Cluster B" for "Server 1" and "Server 2".
Attach "Storage 1" to Cluster B, which hosts "Server 1" and "Server 2".
Enable the Storage Replica feature on Windows Server 2016 "Server 3".
Diagram below:
Will my plan work? Should I upgrade all the servers to Windows Server 2016 Datacenter, or can only "Server 3" be responsible for the Storage Replica?
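For the last step, the kind of validation I would run first is something like this (volume letters, log volumes, and server names are placeholders):
Test-SRTopology -SourceComputerName 'Server3' -SourceVolumeName 'D:' -SourceLogVolumeName 'L:' -DestinationComputerName 'Server1' -DestinationVolumeName 'D:' -DestinationLogVolumeName 'L:' -DurationInMinutes 30 -ResultPath 'C:\Temp'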
Simple and safe way to backup files using Robocopy
Hi
I have about 8 TB of data to back up onto an external drive which has a capacity of 7.3 TB.
What is a simple and safe command to back up all the older files ("/Minage:20180101")?
I want to avoid the problems I have had with the Windows 7 copy (lost date stamps and access issues, for example).
To me it looks like
Robocopy S:\ E:\Backup /E /dcopy:T /minage:20180101
where I want all folders on S: copied (files from before 20180101) to the folder I have created on the external drive, E:\Backup.
Is that good?
I also want to avoid it stopping mid-process because files are open or for some other reason; it must just skip them. Do I add /R:0 at the end of the command line so it doesn't keep retrying or hang?
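So I am guessing the full command would look something like this (the log path is just an example; /R:0 /W:0 stop retries on locked files, and /XJ avoids junction-point loops):
Robocopy S:\ E:\Backup /E /DCOPY:T /MINAGE:20180101 /R:0 /W:0 /XJ /LOG:C:\Temp\backup.log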
I am on Windows 7, but would like to know if Windows 10 would be any different.
Thanks
Justin