Channel: File Services and Storage forum

Network share permissions prevent Word and Excel amendments


Hi,

I am having issues with permissions on a network share. I have created an AD security group and placed some users in it. I have then added this group to the top-level folder called 'Company' on a 2012 server and propagated the permissions down through all files and folders.

The advanced permissions for this group are set to Allow on everything apart from 'Delete' and 'Delete subfolders and files'.

The issue is that when users try to amend Excel and Word files, they get an access denied error. They can edit .txt and .png files, so I suspect this to be an issue with Office related files.

After giving the users the 'Delete' and 'Delete subfolders and files' permissions, they are then able to amend Office documents and spreadsheets.
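
For reference, this is how I check which advanced rights the group actually holds on the folder (a quick PowerShell sketch; the drive path and group name below are placeholders for our real ones):

  # List the access rules that apply to the group so the advanced rights
  # (Delete, DeleteSubdirectoriesAndFiles) can be verified. Path and group are placeholders.
  $acl = Get-Acl -Path 'D:\Company'
  $acl.Access | Where-Object { $_.IdentityReference -like '*FileUsers*' } |
      Format-List IdentityReference, FileSystemRights, InheritanceFlags, PropagationFlags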

Any ideas?

Thanks,

Sam


Set-DfsnFolderTarget : The requested object could not be found.


I get an error when running Set-DfsnFolderTarget to change a folder target to use the FQDN:

Set-DfsnFolderTarget : The requested object could not be found.
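
For reference, a minimal sketch of the call I am attempting, with placeholder paths (as far as I understand, Set-DfsnFolderTarget only modifies a folder target that already exists, so the -TargetPath has to match an existing target entry exactly):

  # Namespace and target paths are placeholders for my real ones
  Set-DfsnFolderTarget -Path '\\contoso.com\Public\Docs' `
                       -TargetPath '\\fileserver01.contoso.com\Docs' `
                       -State Online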

autochk - what does the /q switch mean?

Hi,

I noticed that on Server editions the system runs its disk check with:

autochk /q /v *

What does the /q switch mean?

Thank you.



Creating dump

What would be the page file size required to create a dump? Does it depend upon the amount of RAM on the server?
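
For context, this is how I read the currently configured dump type and page file size on the server (a small sketch; it only reads settings):

  # CrashDumpEnabled: 1 = complete, 2 = kernel, 3 = small, 7 = automatic memory dump
  Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\CrashControl' |
      Select-Object CrashDumpEnabled, DumpFile
  # Current page file allocation and usage, in MB
  Get-CimInstance -ClassName Win32_PageFileUsage |
      Select-Object Name, AllocatedBaseSize, CurrentUsage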

Cannot access shared drive when folder name contains a particular Chinese character


Hi,

Full path on server D:\shared_e\home\admin bill\ang\港

Share path : \\TESTAD01\home\admin bill\ang\港

OS : Windows Server 2008 R2 Standard

The shared folder is not accessible from a client machine when the path contains this particular Chinese character. When we replace it with another character, the share is accessible. The folder is also accessible directly from the server.

The required fonts are installed on both the server and the client machine.

Any advice on what can cause this issue is greatly appreciated.

Error 0x8056536c when enabling deduplication on Windows Server 2016

We're having a hard time enabling deduplication on the drives of our Hyper-V cluster. It worked fine on Windows Server 2012 R2, but we've been getting this error since we upgraded to 2016:

Start-DedupJob : MSFT_DedupVolume.Volume='d:' - HRESULT 0x8056536c, Trabajo de desduplicación no admitido al implementar la actualización del clúster.
En línea: 1 Carácter: 1
+ Start-DedupJob -type Optimization -Volume d:
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   + CategoryInfo          : NotSpecified: (MSFT_DedupJob:ROOT/Microsoft/...n/MSFT_DedupJob) [Start-DedupJob], CimException
   + FullyQualifiedErrorId : HRESULT 0x8056536c,Start-DedupJob

The English translation of the error message is roughly "Deduplication job not supported while deploying the cluster update". We can't find any information at all about 0x8056536c.
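
Since the message mentions a cluster upgrade, one thing we can at least check is the cluster functional level after the 2012 R2 to 2016 upgrade (just a check, not a confirmed fix):

  # 8 = 2012 R2 functional level, 9 = 2016. If the cluster is still at 8, the rolling
  # upgrade has not been committed yet (Update-ClusterFunctionalLevel would commit it).
  Get-Cluster | Select-Object Name, ClusterFunctionalLevel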

Any clues?

Thank you and best regards.

DFS file server list

How do I get a list of the DFS file servers in a domain? My domain has many DFS file servers and I need an inventory of them. How can I generate a report of the DFS file servers?
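
One rough sketch of the kind of report I have in mind (assuming the DFSN module from RSAT is available; the namespace path is a placeholder and would be repeated for each namespace in the domain):

  # Expand every folder in the namespace, collect its folder targets, and keep the
  # unique server names that host them.
  $namespace = '\\contoso.com\Public'
  Get-DfsnFolder -Path "$namespace\*" |
      ForEach-Object { Get-DfsnFolderTarget -Path $_.Path } |
      ForEach-Object { ([uri]$_.TargetPath).Host } |
      Sort-Object -Unique |
      Out-File .\DfsFileServers.txt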

NTFS permission removed when deleting a subfolder


Hello everyone,

I have a very strange issue with my file server. Let me first describe the infrastructure.

OS: Windows Server 2016
Roles: File and Storage Services
Type: Member of a 2016 Domain

On the file server I have the following structure and permissions:

  • F:\
    • ANWDTest
      • ZZZ
        • DIMS
        • Wagenbuch

The NTFS permissions on those folders are as follows:

  • ANWDTest
    Inheritance disabled
    CREATOR OWNER - Full control - Subfolders and files only
    SYSTEM - Full control - This folder, subfolders and files
    Administrators - Full control - This folder, subfolders and files
    L_NTFS_J_R - Read & execute - This folder only

  • ZZZ
    Inheritance enabled
    L_NTFS_J_ZZZ_R - Read & execute - This folder only

  • DIMS
    Inheritance enabled
    L_NTFS_J_ZZZ_DIMS_R - Read & execute - This folder, subfolders and files
    L_NTFS_J_ZZZ_DIMS_W - Modify - This folder, subfolders and files

  • Wagenbuch
    Inheritance enabled
    L_NTFS_J_ZZZ_Wagenbuch_R - Read & execute - This folder, subfolders and files
    L_NTFS_J_ZZZ_Wagenbuch_W - Modify - This folder, subfolders and files

So far I think there is nothing special here; now here is my issue:

When I delete the "Wagenbuch" or the "DIMS" folder, the group "L_NTFS_J_ZZZ_R" is removed from the "ZZZ" folder AND the group "L_NTFS_J_R" is removed from the "ANWDTest" folder... and I have absolutely no idea why this is happening.

Does anyone see an error in the setup, or has anyone faced a similar issue? I am totally lost here and have no idea where to start searching; Google did not help at all.

Thanks for the support!


UPDATE 1: To be sure this is not an issue with our file server, I set up the same structure on another 2016 server and saw the same issue.

UPDATE 2: In the meantime I have done the same setup on a 2012 R2 server, and there is no issue at all, so this seems to be specific to Server 2016.
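
For anyone who wants to reproduce this, here is roughly how I compare the ACLs before and after the delete (a quick sketch using the paths above; I run it against a test copy of the structure):

  # Snapshot the SDDL of the parent folders, delete the subfolder, then snapshot again
  # so the ACE that disappears can be seen in the comparison.
  $paths  = 'F:\ANWDTest', 'F:\ANWDTest\ZZZ'
  $before = $paths | ForEach-Object { (Get-Acl -Path $_).Sddl }
  Remove-Item -Path 'F:\ANWDTest\ZZZ\Wagenbuch' -Recurse
  $after  = $paths | ForEach-Object { (Get-Acl -Path $_).Sddl }
  for ($i = 0; $i -lt $paths.Count; $i++) {
      "{0}`nBEFORE: {1}`nAFTER : {2}" -f $paths[$i], $before[$i], $after[$i]
  }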


DFS-R Backlog Appears Stuck


I inherited two DFS-R hosts connected via a WAN. We'll call them Michigan and California. We have several replication groups, one of which is massive, in the realm of 1.4 million files and 1.6 terabytes after deduplication. About two weeks ago, all of a sudden we had a 1.4 million file backlog, and for whatever reason that backlog has jumped back up to 1.4 million at least once since then. Fast forward to earlier this week: the backlog of files being sent from California to Michigan is ZERO. However, the backlog of files being sent from Michigan to California is seemingly stuck at right around 80,000 and growing (as users continue to make changes). For the life of me, I can't figure out what the hold-up is. The DFSR logs are pretty much Greek to me.

I've seen suggestions to disable membership of the backlogged node, wait for the change to replicate in AD and get picked up by the member servers, then re-enable it to kick off an initial sync. The problem is, I'm under the impression that if I do that I'll lose data that hasn't replicated from Michigan to California yet, since California would become the "master". Sure, I can run a preemptive backup, but there's still a chance that someone will change something while that 18-hour backup runs.

Are there any ideas of what I can do? I'd love to narrow this down and figure out what exactly the hold-up is. I'm at my wit's end with this thing.
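
For what it's worth, this is roughly how I have been sampling the stuck backlog (a sketch, assuming the DFSR PowerShell module is available; the group, folder and server names are placeholders for ours):

  # Show the first 20 files still waiting to replicate from the Michigan member to the
  # California member.
  Get-DfsrBacklog -GroupName 'BigGroup' -FolderName 'Data' `
      -SourceComputerName 'MI-FS01' -DestinationComputerName 'CA-FS01' |
      Select-Object -First 20 -Property FullPathName, UpdateTime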

Long term, the plan is to replace this current DFS solution with something else or to at least prune it down and/or break it into smaller parts.  However, I'm stuck with what I've got for the moment.

Any help would be GREATLY appreciated.

Unable to access DFS path for a user


Hi,

I have set up a home drive for each user, and I have two users. This only happens on one PC for this user:

User 1 can access the DFS path and the direct path to the server share.

User 2 can't access the DFS path but can access the server share directly.

The same user 2 can access the DFS path on a different PC.

Things I have tried:

1. reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Csc\Parameters /v FormatDatabase /t REG_DWORD /d 1 /f, then rebooted the PC; still the same problem.

2. Installed the RSAT tools and flushed the DFS caches:

  • dfsutil cache domain flush
  • dfsutil cache referral flush
  • dfsutil cache provider flush
  • dfsutil /PktFlush
  • dfsutil /SpcFlush
  • dfsutil /PurgeMupCache
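
The next thing I can capture from the affected PC is the referral the client actually receives after the flush (the namespace path below is a placeholder for our home-drive path):

  # Show the cached referrals and how this client resolves the home-drive path
  dfsutil /pktinfo
  dfsutil diag viewdfspath \\contoso.com\home\username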

I don't know what else to try to fix this problem.


What is the "Correct" way to access a DFS file share?


I am setting up DFS Namespaces and replication for the first time, and I have everything set up. However, there seems to be more than one way I can access the network shares. Before I configure my Group Policy and logon script drive mappings, I was wondering whether there is a "correct" way to do it.

For my example, I have created a DFS namespace called "Data" (shown in DFS Management under Namespaces as \\domain.local\data). Under this I have created two DFS "Folders" called "Documents" and "Training". Each of those two folders has my four file servers as folder targets (replication is set up and functioning well). Documents has \\Server1\Documents, \\Server2\Documents, etc. Training has \\Server1\Training, \\Server2\Training, etc.

I can access these shares via the DFS namespace through any of the following UNC paths (I'll use the Documents one for this example):

\\domain\Data\Documents
\\domain.local\Data\Documents
\\domain.local\Documents

Which one is correct, or can I use any?  All my users are used to \\Server\Share without having the middle "data" path in there, but just accessing via \\domain\Documents does not work.  I'm inclined to use the \\domain.local\Documents, but almost all the examples I find about DFS include the "Data" middle path.
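
Before I wire this into Group Policy, the drive mapping I am planning looks roughly like this (the drive letter is just my working example; it maps against the namespace path rather than an individual server):

  # Map the drive against the namespace path rather than an individual file server
  net use P: \\domain.local\Data\Documents /persistent:yes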

Thanks in advance!

-Brad

PS - Let me add that eventually I'd like different permissions on my DFS file shares. What I read was that the local share and security permissions are what is used for access (so make sure they match across all servers), so I was confused about the permissions I should grant to the DFS share "Data" when I created it, and I just gave everyone read/write access. Maybe that is a reason why I should use the \\domain.local\Documents convention, so the permissions on that share are what come through...

PPS - OK, so each of these servers is at a remote branch with fewer than 15 computers, so not only are they sharing files, they are also domain controllers. I think the reason \\domain.local\Documents works is that my shares and the DFS namespace are hosted on the same server. If my shares were on a different server, they would not resolve via \\domain.local\Documents, because domain.local resolves to that branch's domain controller. So the correct way must be to use \\domain\Data\Documents, right? Do I need to include the .local in the path, or is it not necessary? Hopefully the correct permissions get applied.

SMBClient Errors: 30611 followed by multiple 30906


Hello,

I have researched online but could not find anything relevant. A Windows Server 2016 instance running under Hyper-V is crashing and throws multiple events of:

"The IO operation at logical block address 0x###### for Disk 0 "

On the Hyper-V host, after the VM has crashed, there has been a pattern of an SMBClient 30611 error:

"

Failed to reconnect a persistent handle.

Error: The account is not authorized to login from this station.

FileId: 0x200000E0265CBEE:0x20E000000A9
CreateGUID: {b3d6066e-563c-11e8-a949-0002c937dda1}
Path: \networked\path\to\instance.vhdx


Reason: 201

Previous reconnect error: STATUS_SUCCESS
Previous reconnect reason: The reason is not specified

Guidance:
A persistent handle allows transparent failover on Windows File Server clusters. This event has many causes and does not always indicate an issue with SMB. Review online documentation for troubleshooting information.

"

Followed by several 30906 errors:

"

A request on persistent/resilient handle failed because the handle was invalid or it exceeded the timeout.

Status: The transport connection is now disconnected.

Type: Write (and Read)
Path: \networked\path\to\instance.vhdx
Restart count: 0

Guidance:
After retrying a request on a Continuously Available (Persistent) handle or a Resilient handle, the client was unable to reconnect the handle. This event is the result of a handle recovery failure. Review other events for more details.

"

Then the server crashed. If someone has any ideas, or could point me in a direction to recover more relevant logs, that would be super.
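
For completeness, this is roughly how I have been pulling the events on the host (assuming the 30611/30906 events land in the SMBClient Connectivity channel; adjust the log name if not):

  # Collect recent SMB client reconnect failures so they can be lined up with the VM crash time
  Get-WinEvent -FilterHashtable @{
      LogName = 'Microsoft-Windows-SMBClient/Connectivity'
      Id      = 30611, 30906
  } -MaxEvents 50 | Format-List TimeCreated, Id, Message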

Thanks!

Storage Spaces (S2D) - Pool / Volume detailed specification and how the data is stored on physical disks


Hi,

I've noticed that there seems to be a lack of tools that fully support data recovery from Storage Spaces (and S2D); there appears to be only one tool out there that may work. Most only support recovery from a pool that still exists (healthy or degraded). I'm looking at scenarios where a pool was deleted (with no new pool created, as a best case) and the slabs/metadata are mostly or partially intact. These tools generally allow creation of a virtual RAID so that you can at least try to see the file system and files.

The limitation is that they don't know how to create a virtual pool and re-create the volumes and file system based on a slab scan (with the exception of one tool).

I am interested in learning more about Storage Spaces and I know how to develop software. I am curious whether anyone knows of good sources of information (including from Microsoft) on the data structures of the pools and volumes and how those structures are laid out on the physical disks.

I have done a lot of searching and there is very little information (virtually all of it high-level). I thought I'd reach out before trying to reverse engineer the technology.

Thanks

Can't access server's files via SMB but the server can access others.


Hi, I got a weird issue today:

I have three servers: A, B, and C. A and B are on the same network; C is on a different one.

A and B can reach each other's files via SMB. However, A and B are not reachable from C (via SMB, of course). BUT A and B can connect to C.

I've tried disabling the firewall on A, but it's still not reachable from C. I've tried restarting the related services as well.

All three servers are Windows Server 2016 (1607) with the latest updates.
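
A check I plan to run from C (the server name is a placeholder): confirm that TCP 445 on A is reachable at all, and see what gets negotiated when a connection does succeed:

  # From server C: is SMB (TCP 445) on A reachable?
  Test-NetConnection -ComputerName 'ServerA' -Port 445
  # After attempting to map \\ServerA\share, list the connections and negotiated dialects
  Get-SmbConnection | Format-Table ServerName, ShareName, Dialect, NumOpens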

Storage Spaces Direct volume degraded


Hi,

I've set up a four-node S2D cluster and I have issues with a volume/virtual disk.

The repair storage job is suspended with exception error code 50001. The volume operational/health state alternates between Degraded/Warning, NoRedundancy/Unhealthy and InService/Warning, and this repeats again and again.

There are also issues showing on the physical disks when I run Get-PhysicalDisk and look at the status:

Three disks show 'Lost Communication' and another four disks show 'Transient Error'.

Does this mean these disks have failed? How do I confirm they have definitely failed?

Also, I have removed one of the disks; how do I re-add that disk back to the storage cluster?
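
For reference, this is what I run to look at the current state (nothing here changes the pool; it only reads status):

  # Repair/regeneration job progress
  Get-StorageJob | Format-Table Name, JobState, PercentComplete, ElapsedTime
  # Virtual disk / volume health
  Get-VirtualDisk | Format-Table FriendlyName, HealthStatus, OperationalStatus
  # Physical disks that are not healthy
  Get-PhysicalDisk | Where-Object { $_.HealthStatus -ne 'Healthy' } |
      Format-Table FriendlyName, SerialNumber, HealthStatus, OperationalStatus, Usage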

many thanks


Windows Search Committing Unusually High Number of Writes

We’ve noticed on our Terminal Server that the Windows Search service is periodically writing large amounts of data (sustained 600,000-1,000,000 B/sec). This was first noticed when our ShadowProtect backups began generating 2-5GB incremental files. We run them every 15 minutes and this is much larger than our typical backup. We tracked the problem down to Windows Search. 

This was noticed after rebooting the server. We restarted the WSearch service and things seemed to calm down for a while. Within a couple of days the problem reappeared. Since then we rebuilt the search index and configured to only index Outlook. Again, it seemed to run fine for a couple of days and then we noticed the large incremental backups caused by WSearch. 

Our Terminal Server is Server 2012 R2, virtualized on an ESXi host machine. We had Windows Search/Indexing up and running without issue long before this problem appeared.
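
This is roughly how we have been measuring the bursts (assuming the indexer's process instance name is searchindexer; it samples for one minute):

  # Sample the indexer's write throughput every 5 seconds so the spikes can be
  # timestamped against the oversized incremental backups.
  Get-Counter -Counter '\Process(searchindexer)\IO Write Bytes/sec' `
              -SampleInterval 5 -MaxSamples 12 |
      ForEach-Object { $_.CounterSamples | Select-Object TimeStamp, CookedValue }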

Rln

Hyper-V over SMB3 problem


Hello.

I have a problem with my Hyper-V cluster.
It is simply a two-node failover cluster with the Hyper-V role. It uses an SOFS share for VM storage.

The SOFS role is run by a second storage failover cluster dedicated solely to this purpose. That storage cluster consists of two nodes and shared iSCSI storage; the disks are added as CSVs and the SOFS shares live on them.

All Hyper-V and SOFS cluster nodes have dedicated 2x10G interfaces, so SMB3 multichannel is in place.
- SMBv1 removed
- NETBIOS disabled
- TCP timestamps enabled: "netsh int tcp set global timestamps=enabled"
- TcpAckFrequency and TcpNoDelay enabled (REG_DWORD 1) in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<SAN interface GUID>

Approximately every two weeks all VMs hang due to losing connection to SOFS share.

Symptoms:
- UNC address \\SOFS.INSIDE.LOCAL cannot be accessed from Hyper-V cluster nodes with error "The remote procedure failed and did not execute." https://i.imgur.com/ye69RKt.png
- SOFS share can be accessed by UNC address \\SOFS from Hyper-V cluster nodes
- SOFS share can be accessed directly by \\SOFS.INSIDE.LOCAL\SHARENAME from Hyper-V cluster nodes
- SOFS share can be accessed from any other servers by \\SOFS.INSIDE.LOCAL or \\SOFS

Known workaround: reboot the Hyper-V cluster nodes, or even just one of the two. Rebooting the SOFS cluster nodes doesn't help.

OS: Windows Server 2016 everywhere, 2018-06 updates

Of course I could go back to connecting the iSCSI storage directly to the Hyper-V cluster, but this dedicated SOFS storage cluster was put in place to simplify the setup of the Hyper-V and (in future) SQL cluster nodes: I won't need to update the storage array software on all cluster nodes (~20 nodes in future) when a new version comes out, and for troubleshooting purposes all storage array-host relationships will be only between the two storage nodes and the array.

I believe the problem is somewhere in the SMB client-server interaction.

I've already tried "Set-SmbClientConfiguration -MaxCmds 32768" on the Hyper-V nodes and "Set-SmbServerConfiguration -MaxThreadsPerQueue 64 -AsynchronousCredits 8192" on the SOFS nodes, but it didn't help. All other SMB settings are at their defaults.
From my point of view this setup looks pretty simple: Hyper-V running VMs with storage over SMB, without anything unusual or special.

I captured the problem with Process Monitor: https://i.imgur.com/ewDDpL9.png
I also captured it with Network Monitor: https://i.imgur.com/gbVvrZm.png (with the filter ProtocolName == "SMB2")
In this sample, 10.10.10.101 is SOFS node #1 SAN interface 0 and 10.10.10.155 is HV node #5 SAN interface 0.

It looks like a problem in the RPC-over-SMB communication via the Server Service Remote Protocol (https://msdn.microsoft.com/en-us/library/dd303117.aspx), but I have no idea what the problem is there.

According to this blog post (https://blogs.technet.microsoft.com/josebda/2013/10/30/automatic-smb-scale-out-rebalancing-in-windows-server-2012-r2/), the Hyper-V servers' access to the SOFS share should be considered symmetric, because both SOFS nodes are connected to the SAN identically via iSCSI. However, I see a lot of 30814 events logged at one-second intervals, first stating that the share type is asymmetric (https://i.imgur.com/LJ425BN.png) and then stating that it is symmetric (https://i.imgur.com/MnfxtDQ.png).
I can't find any documentation (other than that blog post) about this behavior or about how SOFS determines the type of a share (symmetric/asymmetric).

Also, in the SMB Witness Client event log I can see a lot of "Witness registration has completed." and "Witness Client received a share move request" events.
These events look related, but I can't investigate this SMB interaction any further.
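
When it happens again I plan to capture the client-side SMB state from an affected Hyper-V node before rebooting anything (a sketch; nothing here changes configuration):

  # Which connections exist to the SOFS name, which dialect, and how many opens
  Get-SmbConnection | Format-Table ServerName, ShareName, Dialect, NumOpens
  # Which SAN interfaces multichannel is actually using for those connections
  Get-SmbMultichannelConnection | Format-Table ServerName, ClientIpAddress, ServerIpAddress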

Yes, we have a support case open (118072618661320), but I haven't been able to get any response for more than two weeks now.

Server unresponsive preventing RDP login


Hello,

We encountered very strange behaviour on some members of a Windows Failover Cluster.

Server: Windows 2008 R2 SP1

All of a sudden we could not log in to two nodes via RDP. Further investigation showed the following situation:

1. RDP login not possible, no login screen, just black,

2. Connecting to \\servername\c$ worked,

3. Connecting via the Computer Management console worked partially; the console stopped working when we tried to open the services.
    The event log could be opened remotely and checked, showing NO errors in the event logs. No application errors, nothing.

Cluster logs showed no abnormality.

Before rebooting the nodes the cluster resources worked but it was not possible to move resources to other nodes.

Only rebooting the nodes helped to solve the problem.

Any ideas?

Regards

Marcus Deubel

How to change the default location of Work Folders when using group policy


Hi,

I deployed Work Folders on Windows Server 2012 R2. I push the Work Folders URL via Group Policy and do not want any user interaction, but the Work Folders location defaults to %userprofile%\Work Folders. I would like to change it to another location, such as C:\Work Folders. Is there any way to do that? The whole process should be automatic and controlled.

Thanks in advance!

Ucing.



Storage Replica - high log volume read activity


We have some volumes with Storage Replica running between them. When writing data to the source volume, the read activity on the source log appears to be roughly 3 times the actual data throughput. The effect grows when the individual changes to the file share are smaller.

Is there a minimum block size that SR uses regardless of the size of the change? We can see this behaviour on iSCSI drives in a stretch cluster as well as on local disks in server-to-server replication.
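
This is roughly how we measure it while a copy is running (the drive letters stand in for our data and log volumes):

  # Compare data-volume writes with log-volume reads over one minute of copying.
  # D: = replicated data volume, E: = Storage Replica log volume (placeholders).
  Get-Counter -Counter '\LogicalDisk(D:)\Disk Write Bytes/sec',
                       '\LogicalDisk(E:)\Disk Read Bytes/sec' `
              -SampleInterval 5 -MaxSamples 12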

