Channel: File Services and Storage forum

NTFS special permission inherited from?

I need to fix a permission issue, but I can't figure out how it was done.

Looking at a restore of the folder, the NTFS permissions for one user show as Special permissions, applying to "This folder only".

Then if I go under Change permission, this user has Full control, grayed out, so it's inherited. BUT the parent folder does not have her listed, so I don't know what she is inheriting from.

She is part of a group at the parent folder. The share permissions are Authenticated Users Read/Change.

She can open the share and see the docs, but she is unable to open or save to the share. So the group isn't giving her Modify, yet the group has everything but Full control.


Recovery after ReFS events 133 + 513 (apparent data loss on dual parity)

Hi,
I have a single-node Windows Server 2016 machine with a dual-parity storage space, on which a BitLocker-encrypted ReFS volume resides with file integrity enabled. This ReFS volume has hosted/contained a ~17TB VHDX file with archive data since its setup half a year ago. This file has now suddenly been REMOVED by ReFS! More precisely, I see the following two events in the system log:

  1. Microsoft-Windows-ReFS Event ID 133 (Error): The file system detected a checksum error and was not able to correct it. The name of the file or folder is "R:\Extended Data Archive@dParity.vhdx".
  2. immediately followed by Microsoft-Windows-ReFS Event ID 513 (Warning): The file system detected a corruption on a file. The file has been removed from the file system namespace. The name of the file is "R:\Extended Data Archive@dParity.vhdx".

I have the following questions:

  1. Surely, ReFS did not kill the complete inner virtual hard disk file just because some of its blocks' checksums were not correctable. I can also see from the volume's free space that it still must be somewhere, as 26TB are used at the volume level but only 7TB of files are visible. So, how can I access the corrupt VHDX file again for manual recovery of its internal files?
  2. If I understand dual parity correctly, at least two physical disks must have failed simultaneously for this to happen. I do not see any useful events in the system log regarding this. How can I get any clues as to which of the physical disks in my array need to be replaced? (SMART is 100%. I plan to run extended SMART self-tests on each physical disk, but only after data recovery. Still, Windows or ReFS might have logged some clues somewhere about the physical disks involved in this checksum error?)

Thanks!

FSRM quotas showing incorrect values

Hi,

I have set up folder quotas, and on one folder I set the quota to 100GB. This quota shows as 99% used, but the actual folder in Explorer shows only 28GB used. What am I missing or have set up wrong, or is there a fix that needs to be applied? (I don't think my maths is that bad, as I am pretty sure 28 is not 99% of 100.)
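One way to cross-check Explorer's number is to ask FSRM directly what usage it has recorded for the quota. A sketch, assuming Server 2012 or later where the `FileServerResourceManager` module is available; the path below is an invented example:

```powershell
# Ask FSRM what it thinks the quota limit and current usage are
# (replace the path with the actual quota folder).
Get-FsrmQuota -Path 'D:\Shares\Data' |
    Select-Object Path, Size, Usage, PeakUsage
```

Comparing `Usage` here against Explorer's folder size shows whether FSRM's counter has drifted or whether the two are simply measuring different sets of files.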

Can anyone help or point me in the right direction?

Thanks!

Ben 

Move files older than x year from File Server

Hi guys,

We have a file server with 15 TB.
I need a command to help me identify and move files and folders older than x years.

This command must:

- Show the number of files older than x years in a folder and its subfolders,
- Show the total size of these old files in GB,
- Export the result to a CSV file,

- And, as an additional function, move these old files and folders, preserving the source path, into a folder that I choose.
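A rough PowerShell sketch of those steps; the root path, year threshold, CSV location, and archive target are all placeholders to adjust:

```powershell
# Example values - adjust to the real environment.
$root  = 'D:\FileShare'
$years = 5
$cut   = (Get-Date).AddYears(-$years)

# Find files older than the cutoff, recursively.
$old = Get-ChildItem -Path $root -Recurse -File |
       Where-Object { $_.LastWriteTime -lt $cut }

# Number of old files and their total size in GB.
$old.Count
'{0:N2} GB' -f (($old | Measure-Object Length -Sum).Sum / 1GB)

# Export the list to a CSV file.
$old | Select-Object FullName, Length, LastWriteTime |
       Export-Csv -Path 'C:\temp\old-files.csv' -NoTypeInformation

# Move the old files, preserving the relative source path under a target folder.
$target = 'E:\Archive'
foreach ($f in $old) {
    $rel  = $f.FullName.Substring($root.Length).TrimStart('\')
    $dest = Join-Path $target $rel
    New-Item -ItemType Directory -Path (Split-Path $dest) -Force | Out-Null
    Move-Item -Path $f.FullName -Destination $dest
}
```

Running the report part first (everything before `Move-Item`) is a safe dry run before actually relocating 15 TB worth of data.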

Thank you very very much!

Julien.

EFS Recovery Confusion

Hello, I was testing EFS recovery functionality and created a new agent/certificate for our domain admin account. Applied the certificate via GPO to all of our machines. All encrypted files then showed decryptable by the encrypting user and two recovery keys as expected (our default domain controller recovery cert and the new domain admin cert I just created to test).

I no longer have a use for the test cert, so I revoked it and removed the entries from the certificate manager as well as from the GPO, and updated the server's group policy. However, checking the cipher properties of encrypted files, the test cert is still listed under recovery agents. Should it still be there, or is this expected behavior? Do machines continuously add to their recovery agent list when new agents are created, with no way to remove them from the file's cipher information until the cert expires? I just tried encrypting a new test file with my user account, and both recovery agent entries still show up.
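For reference, the users and recovery agents recorded on a specific encrypted file can be listed with the built-in `cipher` tool; the file path here is an invented example:

```powershell
# Show who can decrypt the file, including the recovery agents
# baked into its metadata at encryption time.
cipher /c C:\Secure\test.docx
```

Note that the agent list shown is the one captured when the file was (re-)encrypted, so files encrypted before a policy change keep reflecting the old agent set until they are updated.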

Workfolders issues

Hi all,

Since the most recent official Windows 10 update (the 1803 April update), I've been experiencing some issues.

* In Explorer, some pictures get thumbnails and some don't (as seen in the figure). I tried deleting the thumbnail cache, but that doesn't help.

* The Photos app doesn't sync pictures from Work Folders anymore, and no longer shows thumbnails for videos.

* In Explorer, the 'Download state' icons aren't correct. Some folders have a 'Failed sync' icon, but all the files inside are downloaded correctly.

A clean install of the client operating system doesn't help. Is anyone else experiencing these issues? Is a fix available soon?

Janjaap

robocopy ERROR 87 (0x00000057) Creating Destination Directory "directory path" The parameter is incorrect.

Hi,

I'm struggling with this error when launching the command:

robocopy "P:\netfolder" "C:\localfolder" /R:3 /S /W:3 /XO
where P: is a mapped network drive.

I run this command on several PCs, and it works on every one of them, but on just one (ONE! argh!!) I get this error:

robocopy ERROR 87 (0x00000057) Creating Destination Directory "C:\localfolder\netsubfolder\" The parameter is incorrect.

The issue is with creating the folder at the destination, because if I create it manually, everything works!

Any suggestions?

Thanks, everyone!
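Since creating the folder manually makes it work, one stopgap (it does not explain the root cause) is to pre-create the destination directory tree before the copy. A sketch using the paths from the command above:

```powershell
# Workaround sketch: mirror the source directory tree first (folders only),
# then let robocopy copy the files into the pre-created tree.
Get-ChildItem -Path 'P:\netfolder' -Recurse -Directory | ForEach-Object {
    $dest = $_.FullName -replace '^P:\\netfolder', 'C:\localfolder'
    New-Item -ItemType Directory -Path $dest -Force | Out-Null
}
robocopy 'P:\netfolder' 'C:\localfolder' /R:3 /S /W:3 /XO
```

If `New-Item` also fails on that one machine, the problem is likely with the path itself (e.g. invalid characters or length) rather than with robocopy.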

Anonymous access attempts from SMB clients

Hello to Everyone,

We have a network share on a Server 2012 (not R2) machine, which is used by many computers on the network (Win10 Pro in most cases). Each client can reach that share successfully, but there are a ton of errors (every day) in the Microsoft-Windows-SmbServer/Security log: Event ID 1007, "The share denied anonymous access to the client."

There is a client that reaches the C$ administrative share, and it shows the same problem. For example, these parameters are logged:
Client Name: \\<IP address of client>
Client Address: <IP address of client>:<port number of client>
Share Name: \\*\C$
Share Path: \??\C:\

Sometimes a client generates 100-200 error rows in 2-3 seconds (continuously), sometimes just 4 rows in a second.

Why do they try to access anonymously even though they already have access?

Thank You in Advance:
 - Duke


Storage Spaces Slot numbering

I was wondering if there are any setup adjustments I can make so that the slot numbering of HDDs in File and Storage Services does not follow an HDD to another physical slot, and so that a replacement disk is properly assigned the slot number. I am setting up a 36-bay storage server with Seagate ST6000NM0095 SAS drives connected via a backplane and a Supermicro AOC-S3008L-L8e PCIe HBA SAS adapter. The issue is that if I move a drive from slot 5 to, say, slot 14, it still shows up as connected to slot 5. That wouldn't seem too bad, if not for the fact that if I pull a failed drive from any slot, the new HDD is not assigned a slot number. So I can't view the status of the array by slot number or re-assign slots to replacement drives. Are there any settings for changing this behavior?

Storage Spaces Direct - NFS support?

I have been evaluating storage spaces direct as an option for persistent storage for containers running within Azure VMs.

Our developers said that trying to use an Azure Files SMB share resulted in terrible performance.

They have advised that using NFS is preferred.

Is it possible to use storage spaces direct with NFS?

I have not had much luck so far; if I try to add a General Use File Server role to the failover cluster, it says there are no disks available.

Everything is fine with SMB and SOFS.

Thanks!

Unable to select a disk by, or change, FriendlyName or UniqueId

So here's the thing: ALL my drives have the same friendly name, and they do have random UniqueIds, but PowerShell is unable to find the right disk when presented with a UniqueId.

=======================

PS C:\Users\Administrator> Get-PhysicalDisk | ft FriendlyName,CanPool,Size,MediaType,UniqueId

FriendlyName  CanPool          Size MediaType   UniqueId
------------  -------          ---- ---------   --------
XENSRC PVDISK    True 2147483648000 Unspecified XENSRC  639ce0ad-e343-4628-832a-32b2a3763727
XENSRC PVDISK    True   26843545600 Unspecified XENSRC  91ae49a2-39cf-4a1e-8d35-2552508fb5e0
XENSRC PVDISK    True   26843545600 Unspecified XENSRC  4e2422f4-26ed-49e2-a082-7a8243c56878
XENSRC PVDISK    True 2147483648000 Unspecified XENSRC  65ef5d15-c6f0-495e-a4cd-82ebdc4d50f4
XENSRC PVDISK   False   80530636800 Unspecified XENSRC  6c88d402-e218-4a5b-b1e4-faf295a34c11
XENSRC PVDISK    True   26843545600 Unspecified XENSRC  25089c59-ffbc-471d-aac1-2f5c8a8a80c3
XENSRC PVDISK    True 2147483648000 Unspecified XENSRC  c00156ee-60f4-42c7-83af-cb9555074c30


PS C:\Users\Administrator>

=======================

I want to set the correct media types so I can use the SSDs for caching: the 3x2TB drives will run in parity, along with the 3x25GB SSD drives acting as a cache.

I looked it up online, and everyone suggests this:

=======================

PS C:\Users\Administrator> Set-PhysicalDisk -UniqueId "{25089c59-ffbc-471d-aac1-2f5c8a8a80c3}" -MediaType  SSD
Set-PhysicalDisk : The requested object could not be found.
At line:1 char:1
+ Set-PhysicalDisk -UniqueId "{25089c59-ffbc-471d-aac1-2f5c8a8a80c3}" - ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (PS_StorageCmdlets:ROOT/Microsoft/..._StorageCmdlets) [Set-PhysicalDisk], CimException
    + FullyQualifiedErrorId : MI RESULT 6,Set-PhysicalDisk

PS C:\Users\Administrator>

=======================

That doesn't work, so you'd think to just copy the entire string as-is:

=======================

PS C:\Users\Administrator> Set-PhysicalDisk -UniqueId "XENSRC  91ae49a2-39cf-4a1e-8d35-2552508fb5e0" -MediaType SSD
Set-PhysicalDisk : The requested object could not be found.
At line:1 char:1
+ Set-PhysicalDisk -UniqueId "XENSRC  91ae49a2-39cf-4a1e-8d35-2552508fb ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (PS_StorageCmdlets:ROOT/Microsoft/..._StorageCmdlets) [Set-PhysicalDisk], CimException
    + FullyQualifiedErrorId : MI RESULT 6,Set-PhysicalDisk

PS C:\Users\Administrator>

=======================

Nope. I've tried quotation marks, curly brackets, single quotes, and all of this again both while the disks are part of a pool and while they are not. Nothing works.

The OS I'm using is Windows Server 2016 Datacenter.
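A workaround sketch (an assumption on my part, not from the thread): since the UniqueId values here contain embedded spaces after "XENSRC", it may be easier to select the disk objects and pipe them to `Set-PhysicalDisk` rather than passing the ID as a string. The 26843545600-byte disks are exactly 25GB, so size makes a clean selector:

```powershell
# Tag the three 25 GB virtual disks as SSD by selecting on size
# and piping the disk objects, instead of passing -UniqueId as text.
Get-PhysicalDisk |
    Where-Object { $_.Size -eq 25GB } |
    Set-PhysicalDisk -MediaType SSD
```

The same pattern with `$_.Size -eq 2147483648000` would select the 2TB parity drives for `-MediaType HDD`.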

Is it possible to give read/write/modify but not delete access on a shared folder to domain users?

Dear All, 

I am facing a difficulty with my file share (network drive, Server 2016) system. We are using Active Directory (Server 2016), and all users are in the domain.

We have implemented a file share (network drive) system with custom rules for users. Basically, we have set up 3 rules:

1. Read-only access

2. Read and write access; delete is not permitted

3. Full access (read, write, and delete permissions).

But the problem is with rule number 2: read and write, delete not permitted. When users work on a .txt file and save it, it works; but when users work in Microsoft Word or Excel and save the file after their work, it shows the message "Access denied. Contact your administrator," and the file is not saved.

But when I change the rule and check the Delete permission for those users, it works smoothly. That means I have to move those users from rule number 2 to rule number 3.

I don't know the reason. 

If anyone can help me with this, it would be very much appreciated.
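For context: Word and Excel save by writing the new content to a temporary file and then deleting and renaming the original, so a rule that blocks Delete breaks their save even though editors that rewrite the file in place (like Notepad) still work. A hypothetical sketch of a "rule 2" style grant with `icacls`, where the path and group name are invented examples and the parenthesized codes are icacls specific-rights flags:

```powershell
# Hypothetical example (path and group are invented): grant read/write-style
# specific rights WITHOUT the DE (delete) flag.
#   RD/WD/AD = read/write/append data, REA/WEA = extended attributes,
#   RA/WA = attributes, X = traverse/execute, RC = read control, S = synchronize
icacls 'D:\Share\Projects' /grant 'CONTOSO\RWNoDelete:(OI)(CI)(RD,WD,AD,REA,WEA,RA,WA,X,RC,S)'
```

The catch, as observed in the post, is that Office's save sequence still needs to delete its own files, which is why adding Delete (rule 3) makes saving work again.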

Thank you.

Regards,

Yasib Ahmed



How to disable / decrease MPIO cache

Hi, 

Testing out a new backup repository.

I have a QNAP NAS with 12 disks. It's currently connected to my Aruba switch via 2x1Gbit links. My backup server is physical with 4x10k RAID 5, running Windows 2012 R2. The backup server is also connected to the same switch with 2x1Gb links.

The QNAP is an iSCSI target, and MPIO has been enabled on the backup server.

When I copy a 50GB test file from the backup server to the QNAP, the progress bar shows a speed of 350MB/s, which is impossible. After the progress bar shows that the file is copied (after ~3 mins), a good portion of the file is still in the memory of the backup server, and both NICs are still transferring ~1Gbps for about two minutes after the file was supposedly copied. You can see the memory consumption first rise by 20GB, then slowly drop back to normal. On the receiving end, the QNAP shows a transfer speed of 200MB/s the whole time.

Is there some kind of built-in caching in MPIO? Is there a way to disable it, or significantly lower the amount of cache?


Edit: In devmgmt -> Disk drives -> QNAP iSCSI Storage Multi-Path Disk Device -> Policies, "Enable write caching on the device" is not checked.

You must be an administrator or have been given the appropriate privileges to view the auditing permissions of this object.

I feel silly even asking this, but I don't know why I can't overcome it.

The owner of this directory is the local admin; I have domain admin permissions AND my account has Full control on the directory. Yet I get this message when viewing the Auditing tab:

You must be an administrator or have been given the appropriate privileges to view the auditing permissions of this object.

Why do I need to do this if I already have permissions to the directory?

Jim

Cannot access file shares after moving Domain Controller to Azure

I recently moved a domain controller VM to Azure using ASR. After moving the VM, I used an Azure Point-to-Site VPN to connect to the Azure VNET. I was able to ping and RDP to the VM, but I couldn't access file shares.

However, after demoting the server on Azure, I was able to access file shares without a problem, and I could still access the file shares even after promoting the server again.

Any idea what I need to configure to enable access to file shares without demoting the domain controller?

Janindu Nanayakkara



change dfs folder to new server

I am adding a new server and will be shutting down the existing physical file server. I have replicated the shares to 2 new VM file servers and am now ready to redirect my DFS to point to the new servers. Should I add them as new folder targets and delete the existing one, or is there a way to change the path DFS is using so it points to the new server?

DFS Namespace 2016 not accessible on cross domain platform

DFS Namespace 2016 not accessible on cross domain platform

Set-PhysicalDisk -MediaType XXXX -UniqueId abcdefgh --- Reverts after restart.

Hey All,

In an attempt to change the media type of my different VHDs, I'm making this change within the VM.

Set-PhysicalDisk -MediaType HDD -UniqueId 6xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA
Set-PhysicalDisk -MediaType SCM -UniqueId 6xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxF
Set-PhysicalDisk -MediaType SSD -UniqueId 6xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx4

But after a restart it's all lost, and MediaType is switched back to Unspecified.

Any advice on how to keep this in place?

Thanks



Super Storage Server

Hi,

We plan to use Windows Server 2016 in a datacenter for storing video.

The OS is Windows Server 2016 with 16 cores. We plan to use 24 12TB drives, for a total of 288TB.

We have the following questions.

1) How much RAM do we need?

2) Since we have 24 drives: letters A and B can't be used, and letter C is used by the SSD system drive. We want each data drive to have its own letter, D, E, F, ... Z. How do we assign each drive a single letter in the OS when only 23 letters are available?
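On the drive-letter question: letters are not the only option. A volume can be mounted into an empty NTFS folder instead, which removes the 23-letter ceiling entirely. A sketch where the disk/partition numbers and paths are invented examples:

```powershell
# Hypothetical sketch: expose a volume through an NTFS mount point
# instead of a drive letter. Disk/partition numbers and paths are examples.
New-Item -ItemType Directory -Path 'C:\Video\Disk24' -Force | Out-Null
Add-PartitionAccessPath -DiskNumber 24 -PartitionNumber 2 -AccessPath 'C:\Video\Disk24'
```

Each of the 24 data drives can be mounted under a folder such as C:\Video\Disk01 through C:\Video\Disk24, so no drive letters beyond C: are consumed at all.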
