Channel: File Services and Storage forum

You don't currently have permission to access this folder (access denied)


Hi,

I have two file servers with the DFS role installed.

On the source server (STOR01) I can access the folders and disks. On the new destination server (STOR02), I have problems with the same folders and disks, using the same administrative account.

If I try to open some folders that have custom permissions, I receive the message "You don't currently have permission to access this folder".

After pressing "Continue", my account is added to those folders and I can access them.

Where custom permissions are set on the disk itself, I receive an access denied error.

The permissions on the disk are:

The permissions are the same on the STOR01 server, but there are no access problems there. From STOR01 I can also access this disk as \\stor02\g$, but from stor02 itself I receive a "resource is not available" error.

My account is a domain admin and a member of the stor02 local Administrators group.
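
For reference, a sketch of how the ACLs could be dumped on both servers and compared (run elevated; G: and C:\temp are placeholders for the affected disk and an output location):

# Dump the ACL of the disk root on each server into a text file.
icacls G:\ > C:\temp\stor02-acl.txt

# After collecting the same dump on STOR01, diff the two files.
Compare-Object (Get-Content C:\temp\stor01-acl.txt) (Get-Content C:\temp\stor02-acl.txt)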

Please help me understand what the reason could be.


Server 2012 R2 - DFS Replication not working in one direction - Insufficient Disk Space error, but disk space isn't the problem


I have seen several posts on this issue and possible solutions; so far nothing has helped in my case.

We have two servers, S1 (primary) and S2, connected over the LAN, with a users folder in a replication group. The replication group's bandwidth is set to Full. From S1 to S2 replication is working fine and the backlog is very low, which is normal. But from S2 to S1 the backlog is stuck at 7779; an hour ago it was 7780. I have checked the DFSR event logs, DFSR diagnostic reports, etc.

In the event log, and also in the report, there is an error for the S2 server:

DFS Replication is unable to replicate files for replicated folder Users due to insufficient disk space.

  Affected replicated folders: Users 

  Description: The DFS Replication service was unable to replicate one or more files because adequate free space to replicate the files was not available for staging folder E:\Users\DfsrPrivate\Staging. Event ID: 4502 
  Last occurred: Wednesday, January 30, 2019 at 3:34:46 PM (GMT10:00) 

  Suggested action: Determine whether the volume reported hosts the replicated folder, the staging folder or both as in default configuration. See additional information about disk space under the informational section in the table titled "Current used and free disk space on volumes where replicated folders are stored". Ensure that enough free space is available on the volume for replication to proceed or move the associated replicated folder or staging folder to a different volume that has more free space.

Now, our S2 E: drive is 42.1 TB with 28.3 TB free; the S1 E: drive has similar free space. Users is the root shared folder that contains the individual user folders. User files are usually not that big.

The Users folder staging size has never been a problem, as I allocated sufficient space (200 GB) on both servers. When I checked the current staging folder size, it is only 4.49 GB on S2 and 146 GB on S1.
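
For reference, a sketch of how the configured staging quota can be double-checked and raised with the DFSR PowerShell module (the group name RG01 is taken from the 5014 event below; 204800 MB = 200 GB):

# Show the staging path and quota configured on each member.
Get-DfsrMembership -GroupName "RG01" -ComputerName S1,S2 |
    Select-Object ComputerName, FolderName, StagingPath, StagingPathQuotaInMB

# Raise the quota on S1 if it turns out lower than expected.
Set-DfsrMembership -GroupName "RG01" -FolderName "Users" -ComputerName S1 -StagingPathQuotaInMB 204800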

When I run "Dfsrdiag.exe ReplicationState" on S2, it gives me this:

dfsrdiag.exe ReplicationState /member:S2

  Total number of inbound updates scheduled: 88

Summary

  Active inbound connections: 1
  Updates received: 120

  Active outbound connections: 0
  Updates sent out: 0

Operation Succeeded
For S1,
dfsrdiag.exe ReplicationState /member:S1

  Total number of outbound updates being served: 15

Summary

  Active inbound connections: 0
  Updates received: 0

  Active outbound connections: 1
  Updates sent out: 15

Operation Succeeded
Just a week ago, S2's replication service ran into an issue and had to rebuild its database, then performed an initial replication that took around 2-3 days to complete. Since then the replication service has been running fine. The most recent event log entry that catches my eye after S2 was last rebooted is the E: drive free-space issue (Event ID 4502). Right before that there is another entry, Event ID 5014:
The DFS Replication service is stopping communication with partner S1 for replication group RG01 due to an error. The service will retry the connection periodically. 
Additional Information: 
Error: 1818 (The remote procedure call was cancelled.) 
Connection ID: 257B85DC-8C09-42EF-9727-4176A2F88527 
Replication Group ID: 158FE127-1927-463F-88CC-70E6B0014656
This is what I have, and I am going in circles trying to find out what is responsible for S2 not replicating, or replicating very slowly, to S1. Any advice/help will be appreciated. Thank you.

Storage Spaces Direct Windows 2019 - adding disks to SSD cache


I am building up a number of lab environments, and I have a solution with the following:

1xSSD - Cache

8xHDD - Capacity

I want to test increasing the cache by adding 3 extra SSDs per server (same hardware model), but Storage Spaces appears to allocate these straight to capacity automatically.

Is there a mechanism to force these SSDs into the cache?
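
As a hedged sketch: Storage Spaces Direct can be told which drive model to claim for the cache via Set-ClusterStorageSpacesDirect (the model string below is a placeholder; take the exact value from Get-PhysicalDisk):

# Find the exact model string reported by the new SSDs.
Get-PhysicalDisk | Select-Object FriendlyName, Model, MediaType, Usage

# Bind that model to the cache; S2D should then claim matching drives as cache devices.
Set-ClusterStorageSpacesDirect -CacheDeviceModel "SSD-MODEL-STRING"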

Share Drive Audit log


Dear Folks,

I have created a Windows failover cluster and added the file server role.

Now I want to enable audit logging on a particular share drive to monitor who is deleting or adding files on that share.

Auditing is already enabled, but file delete events are not being generated. Please suggest whether I am doing something wrong or missing a setting.
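
For reference, file auditing needs two pieces in place: the audit policy on the node that owns the share, and a SACL on the folder itself. A sketch, with the path as a placeholder:

# 1. Enable object-access auditing for the File System subcategory (run on each cluster node).
auditpol /set /subcategory:"File System" /success:enable /failure:enable

# 2. Add an audit rule (SACL) on the shared folder for deletes and creates by Everyone.
$path = "S:\SharedDrive"   # placeholder: clustered disk path behind the share
$acl  = Get-Acl -Path $path -Audit
$rule = New-Object System.Security.AccessControl.FileSystemAuditRule(
    "Everyone",
    "Delete,DeleteSubdirectoriesAndFiles,CreateFiles,CreateDirectories",
    "ContainerInherit,ObjectInherit", "None", "Success")
$acl.AddAuditRule($rule)
Set-Acl -Path $path -AclObject $acl

Deletes should then show up in the Security log as Event IDs 4663 and 4660.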

Thanks

Yogesh.

Failed to access Work Folders from a Shadow Copy volume after Windows 10 1803

My Avamar backup fails to access Work Folders in Windows 10 (the version is 1803; 1809 has the same issue).

Work Folders can be accessed smoothly in the normal environment. Only when I create a Shadow Copy volume does the backup process (avtar) fail to touch it, with the following error.

If I launch the backup process from the command line, the backup works. The only difference between the two processes is the user they are launched as: SYSTEM or administrator.

So it looks like a process running as administrator can access Work Folders from VSS, while a process running as SYSTEM cannot.

VSS path:
\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy11\Users\fuc4\Work Folders
API:
FindFirstFile
GetFileAttributesExW
CreateFileW
Errors:
19 : The media is write protected.

As far as I know, Windows restricts some system processes from accessing the network.

But for Work Folders, this behavior never occurred before Windows 1803.

Does anybody know how to resolve this kind of issue?


Creating a virtual disk for SOFS


I am struggling to create a virtual disk for SOFS on Server 2016.

I have 3 JBODs, each with 3 SSDs, and I want to create a virtual disk with 3 columns and 2 data copies.

However, every time I try to create this it fails, telling me I have the wrong disk setup for the resiliency type I want. But why wouldn't 3 SSDs per JBOD work?

I can create a mirror with 1, 2, 3 or 4 columns as long as I don't set enclosure awareness. But as soon as I try to enable enclosure awareness, it gives me the error message.
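
For reference, a sketch of the enclosure-aware attempt (pool and disk names are placeholders). It can help to first confirm that all three JBODs are recognized as enclosures and that every SSD reports an enclosure number:

# All 3 JBODs should appear here; disks missing an EnclosureNumber break enclosure awareness.
Get-StorageEnclosure
Get-StoragePool -FriendlyName "Pool1" | Get-PhysicalDisk |
    Select-Object FriendlyName, EnclosureNumber, Size, HealthStatus

# The enclosure-aware 3-column two-way mirror being attempted.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "SOFS-VD01" `
    -ResiliencySettingName Mirror -NumberOfColumns 3 -NumberOfDataCopies 2 `
    -IsEnclosureAware $true -UseMaximumSize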

Any help with how this could be set up, or with what I am doing wrong, is appreciated.

Server 2016 Previous Versions last few days not visible


Hi All,

On several 2016 file servers we have seen that the last few days of previous versions are not visible. In the Shadow Copies tab all snapshots are visible, but in the Previous Versions tab the last few days are missing. Does anyone have an idea?

Good to know: we have raised the maximum number of VSS snapshots to 512 via the well-known DWORD:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VSS\Settings]
"MaxShadowCopies"=dword:00000200

And we have configured the VSS snapshots on a separate disk.
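
For reference, a quick sketch to confirm whether the newest snapshots actually exist on an affected volume (D: is a placeholder), since the Previous Versions tab is built from the same shadow copies:

# List all shadow copies for the data volume and compare the newest timestamps
# against what the Previous Versions tab shows.
vssadmin list shadows /for=D:

# Count them; at the 512 cap, the oldest copies are recycled as new ones are created.
(Get-CimInstance Win32_ShadowCopy).Count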

Thanks!

Top solutions for troubleshooting common issues on S2D


Storage Spaces Direct uses industry-standard servers with local-attached drives to create highly available, highly scalable software-defined storage at a fraction of the cost of traditional SAN or NAS arrays. Its converged or hyper-converged architecture radically simplifies procurement and deployment, while features such as caching, storage tiers, and erasure coding, together with the latest hardware innovations such as RDMA networking and NVMe drives, deliver unrivaled efficiency and performance. 

 

In this section, you will learn about the states that can be invaluable when troubleshooting various issues, ways to troubleshoot your Storage Spaces Direct deployment, and frequently asked questions related to Storage Spaces Direct.

 


Please remember to mark the replies as answers if they help.
If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com.


When we turn off the old AD server, one user loses access to files on the file server


We have an old Server 2003 AD server that we are retiring in favor of Windows Server 2016. When we turn off the old 2003 server, one user cannot get to files on the file server (another 2003 server).

They can ping the file server but are denied access, even though their permissions are in place.



GPO used to create a folder on a shared drive and a desktop icon no longer works


I have a GPO that, when a user logs on to a domain PC, creates a folder named "%USERNAME%" on the shared drive and places a shortcut on the desktop referring to that location.

This was working fine, but we had an issue with some security permissions, and now neither the shortcut nor the folder on the shared drive is being created. I have verified that new users can write to the shared folder and create a new folder manually. SYSTEM and Domain Users have full control of the shared folder, and I have attached the settings for the GPO. I have also checked GPRESULT /R and confirmed that the GPO is being applied to the machine.


Jeremy Robertson Network Admin


Deduplication Problems | Garbage Collection hangs at 0%


Hello, 

For a couple of months now we have had a problem with deduplication in our live production environment.
We have two servers (FS01 and FS02) and use DFS to replicate files between them.
One server is located at our office and the other at the second office.
We got a DFS backlog of 1,000,000+, so we decided to move the servers onto the same network to sync the files.

When we start garbage collection on FS02, it works well and finishes after 4-6 hours.
When we start garbage collection manually on FS01 with high priority, the job starts.
But now, after 3 days, when I check the status it still hangs at 0%.
How can we solve this problem?
We already fully updated and restarted the servers before starting the garbage collection.
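
For reference, a sketch for inspecting and restarting the stuck job with the Deduplication cmdlets (D: is a placeholder for the dedup volume):

# See what the job reports; a job sitting at 0% for days usually warrants a restart.
Get-DedupJob
Get-DedupStatus -Volume D: | Format-List *

# Cancel the hung job, then restart garbage collection as a full job.
Stop-DedupJob -Volume D: -Type GarbageCollection
Start-DedupJob -Volume D: -Type GarbageCollection -Priority High -Full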

If you need any more information, let me know.



Enabling EFS on FileStream Folders


Please help me with the issue below.

Below is my environment:

Windows Server 2012 R2 Standard

SQL Server 2014 SP2 GDR

Availability Groups 2014 with 3 nodes (2 sync and 1 BCP async)

I have 7 databases with FileStream enabled.

Each of the 7 databases' FileStream data folders is around 500 GB.

Due to security policy, I need to enable both Transparent Data Encryption (TDE, for structured data) and Encrypting File System (EFS, on the FileStream folders).

While enabling EFS on the FileStream folders, I get the error below.

(NOTE: I am doing this by taking the SQL Servers offline; before taking the services offline, I fail the AG over to the next available synchronized AG node.)

I cannot ignore the error and move on, so please advise on the following:

I even tried turning off the antivirus and firewall; still no luck.

  1. What is the root cause of this issue, and how can I cleanly enable EFS for the 7 databases' FileStream folders?
  2. Can I enable EFS on multiple databases' FileStream folders in parallel, keeping in mind their sizes (500 GB each)?
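
For reference, a sketch of driving the encryption per folder with cipher.exe, so progress and the exact failing file are visible (the path is a placeholder; the database must be offline while this runs):

# /e = encrypt, /s = recurse into subfolders, /b = abort on the first error
# instead of silently continuing, which isolates the file that triggers the failure.
cipher /e /s:"E:\SQLData\DB1_FileStream" /b

Note that only users whose EFS certificate is on a file can open it afterwards, so the SQL Server service account either needs to perform the encryption itself or be added with cipher /adduser.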

Kindly advise. Thanks


Best Regards, SQLBoy


you do not have permission to access \\server


Windows 7, fully up to date. When attempting to use Windows Explorer to access \\server across the (peer workgroup) network, I get the error message above. However, applications whose default opening path points to \\server can access data on \\server, albeit with slow response times.

I have tried various fixes in this forum, but without success.
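
For what it's worth, a sketch of an explicit connection test from the Windows 7 machine; the numeric error it returns is more specific than Explorer's generic message (the user name is a placeholder):

# Connect to the hidden IPC$ share with explicit workgroup credentials;
# the * prompts for the password.
net use \\server\IPC$ /user:server\username *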

Can anyone please help?

Many thanks

Derrick Price

DFSR Health Reports - root files can't be replicated and don't exist


I'm trying to understand the output of this DFSR report. It says certain files in the root can't be replicated. When I look at the root of the folder with hidden and system files shown, I do not see the files listed in the report. Here is one example. I have another DFSR relationship that shows many more instances. Any idea what causes this?

Suggestions - Enabling EFS on FileStream Data for SQL Server Database


If anyone has already enabled EFS on FileStream data across multiple databases of around 500 GB+ each, please share your thoughts on feasibility and the various activities you planned around it.

We are in the process of enabling EFS for PROD and are testing in lower environments too. Your input will help us.

Which is better: enabling EFS in parallel, or one FileStream folder at a time?

Please advise. Thanks


Best Regards, SQLBoy


SMB share vs 'normal' share


In Windows 2016, via Server Manager -> File and Storage Services -> Shares, you have the option to choose an SMB share: Quick, Advanced, or Application.

If you right-click a folder, go to Properties and create a share from there, you don't have these options.

So what kind of share is created if you share a folder via its properties?
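
It ends up as a regular SMB share either way, broadly equivalent to the Quick profile; the Server Manager profiles mainly pre-select extra options on top of it. A sketch for comparing (the share name is a placeholder):

# Inspect a share created via folder properties; the properties listed here are the same
# ones the Server Manager wizard exposes, just left at their defaults.
Get-SmbShare -Name "MyShare" | Format-List *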

Optimize-Volume -retrim is not giving me my space back!


Hello, I have a Server 2012 R2 failover cluster with two hosts, and a dynamically expanding VHDX attached to my file server. The current file size is 3,017,413,632 KB (2.8 TB). I have moved a lot of data off this volume, so the actual space used inside the VM is 2.22 TB. I have run the following command:

Optimize-Volume -DriveLetter D -ReTrim -Verbose

It then seems to go through the process with no errors. Usually when I do this on other virtual machines, once I shut the VM down and power it back on, the size of the VHDX on the host shrinks to the expected size. But I've tried this 3 or 4 times now and the size doesn't seem to be going down.

Here's my output from the command:

PS C:\Users\administrator> Optimize-Volume -DriveLetter D -ReTrim -Verbose
VERBOSE: Invoking retrim on Data (D:)...
VERBOSE: Performing pass 1:
VERBOSE: Retrim:  0% complete...
VERBOSE: Retrim:  1% complete...
VERBOSE: Retrim:  2% complete...
VERBOSE: Retrim:  3% complete...
VERBOSE: Retrim:  4% complete...
VERBOSE: Retrim:  5% complete...
VERBOSE: Retrim:  6% complete...
VERBOSE: Retrim:  7% complete...
VERBOSE: Retrim:  8% complete...
VERBOSE: Retrim:  9% complete...
VERBOSE: Retrim:  10% complete...
VERBOSE: Retrim:  11% complete...
VERBOSE: Retrim:  15% complete...
VERBOSE: Retrim:  17% complete...
VERBOSE: Retrim:  18% complete...
VERBOSE: Retrim:  19% complete...
VERBOSE: Retrim:  20% complete...
VERBOSE: Retrim:  21% complete...
VERBOSE: Retrim:  22% complete...
VERBOSE: Retrim:  23% complete...
VERBOSE: Retrim:  27% complete...
VERBOSE: Retrim:  39% complete...
VERBOSE: Retrim:  50% complete...
VERBOSE: Retrim:  51% complete...
VERBOSE: Retrim:  52% complete...
VERBOSE: Retrim:  53% complete...
VERBOSE: Retrim:  54% complete...
VERBOSE: Retrim:  55% complete...
VERBOSE: Retrim:  56% complete...
VERBOSE: Retrim:  57% complete...
VERBOSE: Retrim:  58% complete...
VERBOSE: Retrim:  59% complete...
VERBOSE: Retrim:  61% complete...
VERBOSE: Retrim:  63% complete...
VERBOSE: Retrim:  64% complete...
VERBOSE: Retrim:  65% complete...
VERBOSE: Retrim:  66% complete...
VERBOSE: Retrim:  67% complete...
VERBOSE: Retrim:  100% complete.
VERBOSE:
Post Defragmentation Report:
VERBOSE:
 Volume Information:
VERBOSE:   Volume size                 = 3.97 TB
VERBOSE:   Cluster size                = 4 KB
VERBOSE:   Used space                  = 2.22 TB
VERBOSE:   Free space                  = 1.75 TB
VERBOSE:
 Allocation Units:
VERBOSE:   Slab count                  = 130170
VERBOSE:   Slab size                   = 32 MB
VERBOSE:   Slab alignment              = 31.00 MB
VERBOSE:   In-use slabs                = 80471
VERBOSE:
 Retrim:
VERBOSE:   Backed allocations          = 92287
VERBOSE:   Allocations trimmed         = 11887
VERBOSE:   Total space trimmed         = 371.46 GB

Any ideas? I need to shrink this volume urgently as I'm running out of space on the host!

FYI, the storage is an HP P2000 MSA G3 SAS storage array. I have previously used this command on the same cluster, on a different virtual machine with a test VHDX that I had removed data from, and the size of the VHDX shrank as expected.
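
One thing worth noting: ReTrim only tells the underlying storage which blocks are free; a dynamic VHDX does not give the space back until it is compacted. A hedged sketch of the compaction step on the Hyper-V host (the path is a placeholder; the VM must be shut down or the disk detached):

# Compact the dynamic VHDX after the guest has trimmed its free space.
Optimize-VHD -Path "C:\ClusterStorage\Volume1\FS01\Data.vhdx" -Mode Full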

Thanks

Lee

Server 2016 Task Scheduler not working?


I'm trying to use Task Scheduler in Server 2016 to upload files to a vendor's AWS S3 bucket. Nothing has worked using Task Scheduler.

If I run the .bat script on its own, it works just fine and the files are uploaded, but not from Task Scheduler.

If I run the same commands in a PowerShell script on its own, it runs just fine and the files are uploaded, but not from Task Scheduler.

Task Scheduler's history log says "Task completed", as you can see from the image I attached.

Settings I have in Task Scheduler:

  • Run whether user is logged on or not
  • Run with highest privileges
  • Configure for Windows Server 2016
  • Repeat task every 5 minutes for a duration of Indefinitely
  • Enabled
  • Action: Run a program
  • Program/script: "C:\scripts\name_of_file.bat"
  • Start in: c:\scripts

Does Server 2016 have some kind of safety feature that I need to enable for this to work in Task Scheduler?

I am a Domain Admin.
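
One common cause: with "Run whether user is logged on or not", the task runs in a non-interactive session, so anything relying on mapped drives or per-user settings (such as AWS credentials stored under that user's profile) can silently fail while the task still reports "completed". For reference, a sketch for checking the actual exit code (the task name is a placeholder):

# Show the Last Result code for the task; 0x0 means the script exited cleanly.
schtasks /query /tn "UploadToS3" /v /fo list

# Trigger the task on demand while watching for the upload.
schtasks /run /tn "UploadToS3"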


Work Folders performance per user


Hello,

Work Folders are causing abnormally high network throughput.

I think there are probably a few users who could be causing it.

How can I measure Work Folders performance per user, to see which users are generating the most traffic?
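
For reference, a sketch using the SyncShare module on the Work Folders server (the share and user names are placeholders):

# List the sync shares, then pull sync status for individual users of a share.
Get-SyncShare
Get-SyncUserStatus -SyncShare "WF-Users" -User "CONTOSO\jdoe" | Format-List *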


Work Folders on-demand access


Hello Guys,

I have configured Work Folders in our company, but I'm facing some strange behavior.

On a Windows 10 1803 x64 client computer, I have enabled the GPO setting GhostingPolicy and set it to UserChoice:



When a user clears the checkbox as shown in the screenshot, it works until the next login to the computer.

After the user logs in again, the checkbox is re-enabled by itself.

Is this a bug, or do you have any information on how to prevent this?

Thanks

Matej
