Channel: File Services and Storage forum
Viewing all 13565 articles

Storage Spaces Virtual Disks missing Provisioning and ResiliencySettingName


Dear all.

I am using Server 2016 Datacenter together with Storage Spaces to handle my 6x4TB drives + 2x233GB SSDs.

I want to create 3 virtual disks on that single pool (using the SSDs for tiering and caching, since parity without an SSD cache is painfully slow):

  • Mirror Tier for important daily userdata (SSD Space + HDD Space)
  • Parity Tier with SSD WriteBackCache  (Movies, Music) on HDDs
  • Mirror on the SSDs for VMs on SSDs only

So far I used the following commands:

  • created a single Pool with the GUI
  • created an SSDTier and an HDDTier

C:\Users\Administrator> Get-StoragePool MyStoragePool | New-StorageTier -FriendlyName SSDTier -MediaType SSD

C:\Users\Administrator> Get-StoragePool MyStoragePool | New-StorageTier -FriendlyName HDDTier -MediaType HDD

  • set Mirror Columns to 1
C:\Users\Administrator> Get-StoragePool MyStoragePool | Set-ResiliencySetting -Name Mirror -NumberOfColumnsDefault 1

  • set variables

C:\Users\Administrator> $SSD = Get-StorageTier -FriendlyName SSDTier

C:\Users\Administrator> $HDD = Get-StorageTier -FriendlyName HDDTier

  • Created a tiered virtual disk with 50 GB SSD and 4000 GB HDD space
C:\Users\Administrator> Get-StoragePool MyStoragePool | New-VirtualDisk -FriendlyName MirrorTier -ResiliencySettingName Mirror -ProvisioningType Fixed -StorageTiers $SSD, $HDD -StorageTierSizes 50GB, 4000GB -WriteCacheSize 5GB

  • Created the virtual disk for the SSD-only mirror
C:\Users\Administrator> $vd1 = New-VirtualDisk -StoragePoolFriendlyName MyStoragePool -FriendlyName Mirror -StorageTiers @($SSD) -StorageTierSizes @(175GB) -ResiliencySettingName Mirror -WriteCacheSize 0GB

  • Created the Parity VDisk with Size close to max and 1 GB WBC
C:\Users\Administrator> $vd1 = New-VirtualDisk -StoragePoolFriendlyName MyStoragePool -FriendlyName Parity -StorageTiers @($HDD) -StorageTierSizes @(12259366260000) -ResiliencySettingName Parity -WriteCacheSize 1GB


Anyhow, Provisioning and ResiliencySettingName are missing on the resulting virtual disks.

Is my setup correct? Any ideas?
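A way to sanity-check where those settings actually landed (just a sketch, reusing the pool and tier names from the commands above): on tiered virtual disks, the resiliency and provisioning details are often carried by the tier objects rather than by the parent virtual disk, which can make the virtual disk's own fields look empty.

```
# Sketch: inspect both the virtual disks and the tiers; on a tiered layout the
# ResiliencySettingName may only be populated on the StorageTier objects.
Get-VirtualDisk | Format-Table FriendlyName, ResiliencySettingName, ProvisioningType, Size
Get-StorageTier | Format-Table FriendlyName, MediaType, ResiliencySettingName, Size
```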





You need permission to perform this action


hi everyone,

I am having issues with my file share server, where I have a shared drive with a folder in it for each department.

Users are able to access the folders based on the permissions assigned,

but the problem is that they are not able to create, copy, or perform any other task in the folders.

Even as an administrator, I can't paste any document.

I need urgent help, please.

thanks

kingsley

Storage Spaces out of box parity volume causes timeout when copying


Dear all,

I just created a parity space over 6x4 TB drives in a pool that doesn't contain any SSDs: a blank, fixed-provisioned parity volume on an LSI controller, a Xeon D board, and Server 2016.

Once I copy large movie files, I easily reach 112 MB/s from client to server for about a minute. I can see the memory load on the server (64 GB) slowly increasing at roughly 100 MB/s, so it seems like Microsoft caches the received data in memory. That goes on for about 5 GB; then the transfer stalls at 0 MB/s while the cache is written to disk.

Anyhow, that takes too long, so the client runs into a timeout error.

If I click Retry after a minute, it starts copying at full speed again, and a minute later the same issue occurs.

What is wrong with the Microsoft Storage Spaces parity implementation? Even when I try it with journaling SSDs, the speed is just barely enough for the client not to run into the timeout, but it's far from good.

I can create the pool over any 3 to 6 disks in my system and it behaves the same: timeout on the client side.

Ideas? Thanks a lot!




Collecting data usage on domain PCs


Happy New Year to all,

I am trying to find a script (I am thinking it is a logon script) to collect two pieces of information: the size of the hard drive on a domain PC, and the amount of space used by data files on the PC. Basically, I need to know whether users are using the file server or keeping all their data on their domain PCs.

Is this possible?
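A minimal logon-script sketch along those lines, assuming the PCs can run PowerShell and that a writable collection share such as \\fileserver\inventory$ exists (both the share path and the CSV layout are my own placeholders):

```
# Sketch: record this PC's disk size and used space to a central share.
$disk = Get-CimInstance Win32_LogicalDisk -Filter "DeviceID='C:'"
$row  = [pscustomobject]@{
    Computer  = $env:COMPUTERNAME
    SizeGB    = [math]::Round($disk.Size / 1GB, 1)
    UsedGB    = [math]::Round(($disk.Size - $disk.FreeSpace) / 1GB, 1)
    Collected = Get-Date
}
# \\fileserver\inventory$ is a placeholder path.
$row | Export-Csv "\\fileserver\inventory$\$($env:COMPUTERNAME).csv" -NoTypeInformation
```

Comparing the UsedGB figures against what each user stores on the file server would then show who is keeping data locally.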

Windows 2008 R2 Client for NFS mount fails using AD


Hi,

I have a Windows 2008 R2 AD and a client system. I know very little about Windows, but as far as I can tell I've set up everything correctly. When I log in to the client with my AD credentials I don't have any issues. But I'm trying to access an NFS export from a Linux system that is using the same AD server to authenticate users for that export. I can mount the export just fine from a Linux system, but when I try to mount it on the Windows 2008 R2 client I always get: Permission denied. I looked at a network trace, and it appears to be failing on a GETATTR right after the mount succeeds. The reply to the GETATTR is a denial, and the person looking at the error on the Linux system says that the GSS context for the GETATTR is empty.

How would I troubleshoot this issue? The documentation on using "Client for NFS" with AD is either non-existent or I just can't seem to find it.

Thanks,

Rob

Shared Virtual Hard Disk - what is it good for?


Since you can't do any of the following with a shared virtual hard disk (https://technet.microsoft.com/en-us/library/dn281956.asp):

    • Resizing
    • Migrating
    • Backing up or making replicas

Are there other ways to address these same three needs outside of the guest cluster itself? Those seem like pretty huge drawbacks. I assume 'making replicas' refers to incompatibility with Hyper-V Replica, but what specifically does 'backing up' refer to...VSS?

Was the intention behind this pretty much: "You don't have / can't get enough hardware for new cluster(s), AKA SOFS, so here's a way to do it as a guest cluster to get high availability, but with these pretty major drawbacks"? Even so, I'm pretty much in that boat right now: I can't get any additional hardware, and I need somewhere to move a bunch of data in pretty short order, since the old hardware is being retired.


    born to learn!


    The create operation stopped after reaching a symbolic link


    I recently deployed symlinks in a DFS file share deployment (mklink /D). I started receiving "The create operation stopped after reaching a symbolic link" on some client machines. I am not able to find any information on what this error actually means or how to remedy it. Does anybody have suggestions for possible solutions or additional troubleshooting steps I should take?

    The create operation stopped after reaching a symbolic link

    Environment:

    • Windows 7 x64 Clients
    • Windows Server 2008 x86 and Windows Server 2008 R2 file share using DFS folder Targets (2) and DFSr replication (2) (although the error started before I created additional folder targets, but after I enabled folder replication.)
    • I've only been able to reproduce the error with a few clients connecting to the Windows Server 2008 x86 Server.
    • The connection is over a WAN link

    Regards

    Work Folders - recent security updates leave it broken


    We have a small Work Folders implementation for a few clients.

    Until recently, the function has been working great, no problems.

    However, we have confirmed that two recent Windows 10 patches leave Work Folders broken and completely unusable for clients that have either of the patches installed:

    KB3185614 (security rollup for Win10 build 1511)

    KB3189866 (security rollup for Win10 build 1607)

    If a Win10 client has either of these updates applied, it cannot use Work Folders. The client attempts to connect to the Work Folders server and gets the error:

    "there was a problem finding your Work Folders server", error code 0x80072f76.

    The Work Folders server is fully patched.

    Removing the patch allows the client to connect and use Work Folders successfully, but as we know, Windows 10 will just reinstall the patch in a few days. We are using the Microsoft "driver update" prevention tool for now to block the patch on affected machines.

    We are opening a support case with Microsoft soon.


    Windows 2003 iScsi connection hangs Server


    Hello to all

    Hope you guys are doing well. I have a Windows 2003 R2 SP2 server with the iSCSI initiator software installed and configured with default settings. When I first connected the target, it went through the process of detecting and formatting the disk (basic, not dynamic). I moved my data to that drive and went into production; 25 days later I needed to reboot for maintenance purposes.

    Now the server hangs on "applying computer settings". The problem is the iSCSI software: when it connects to the target, it hangs the server.

    I set the iSCSI service to manual and the server does not hang. I can log in to the server, start the iSCSI service, go to the target, and log in. Once it connects to the target, the server will not mount the drive, Disk Management stops responding, and from there on everything that has to do with managing the server hangs.

    I tried everything I could think of. The data on the NAS is intact; I connected from a different machine to make sure it's there, and it connects fine.

    Any ideas on what could be causing the server to hang when the iSCSI initiator connects to the target?

    Thanks for any help in advance.

    TF


    Windows Server 2016 RTM - Storage Pool Virtual Disk Tiers Mirrored using PowerShell results in Layout: Empty and Provisioning: Unknown


    With 2 x 250 GB SSDs and 2 x 2 TB HDDs,

    using the following PowerShell commands:

    $PhysicalDisks = Get-StorageSubSystem -FriendlyName "Windows Storage*" | Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "CompanyData" -StorageSubsystemFriendlyName "Windows Storage*" -PhysicalDisks $PhysicalDisks -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Mirror -WriteCacheSizeDefault 5GB
    New-StorageTier -MediaType HDD -StoragePoolFriendlyName CompanyData -FriendlyName HDD_Tier
    New-StorageTier -MediaType SSD -StoragePoolFriendlyName CompanyData -FriendlyName SSD_Tier
    $SSD = Get-StorageTier -FriendlyName *SSD*
    $HDD = Get-StorageTier -FriendlyName *HDD*
    New-VirtualDisk -FriendlyName "UserData01" -StoragePoolFriendlyName CompanyData -ResiliencySettingName Mirror -StorageTiers $SSD, $HDD -StorageTierSizes 180GB, 1TB

    then:-

    Get-VirtualDisk | FL *

    Usage                             : Other
    NameFormat                        : 
    OperationalStatus                 : OK
    HealthStatus                      : Healthy
    ProvisioningType                  : 
    AllocationUnitSize                : 
    MediaType                         : 
    ParityLayout                      : 
    Access                            : Read/Write
    UniqueIdFormat                    : Vendor Specific
    DetachedReason                    : None
    WriteCacheSize                    : 5368709120
    FaultDomainAwareness              : 
    ColumnIsolation                   : 
    ObjectId                          : {1}\\SOMEPC\root/Microsoft/Windows/Storage/Providers_v2\SPACES_VirtualDisk.ObjectId="{2cb0c12b-65ab-11e6-80b4-806e6f6e6963}:VD:{9a3c2324-74da-470c-ab6
                                        d-139859cd6ebb}{97191edf-d131-4a08-aba8-f39b426af22f}"
    PassThroughClass                  : 
    PassThroughIds                    : 
    PassThroughNamespace              : 
    PassThroughServer                 : 
    UniqueId                          : DF1E199731D1084AABA8F39B426AF22F
    AllocatedSize                     : 1303522574336
    FootprintOnPool                   : 2617782566912
    FriendlyName                      : UserData01
    Interleave                        : 
    IsDeduplicationEnabled            : False
    IsEnclosureAware                  : 
    IsManualAttach                    : False
    IsSnapshot                        : False
    IsTiered                          : True
    LogicalSectorSize                 : 512
    Name                              : 
    NumberOfAvailableCopies           : 
    NumberOfColumns                   : 
    NumberOfDataCopies                : 
    NumberOfGroups                    : 
    OtherOperationalStatusDescription : 
    OtherUsageDescription             : 
    PhysicalDiskRedundancy            : 
    PhysicalSectorSize                : 4096
    ReadCacheSize                     : 0
    RequestNoSinglePointOfFailure     : False
    ResiliencySettingName             : 
    Size                              : 1303522574336
    UniqueIdFormatDescription         : 

    Can anyone tell me whether this is a bug, whether the disks are actually mirrored, and what I did wrong?
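    One thing worth checking before calling it a bug (a sketch, not a definitive answer): on Server 2016, tiered spaces record the resiliency setting per tier, so the empty fields on the parent virtual disk are not necessarily a fault. The tier objects can be queried directly:

```
# Sketch: ask the StorageTier objects themselves how they are configured.
Get-StorageTier | Format-List FriendlyName, MediaType, ResiliencySettingName, Size
```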

    VSS error 8193

    I am getting three VSS errors when the WS2008 backup executes. I have no idea where to begin looking for the problem.
    The error (Event 8193) is:
    Volume Shadow Copy Service error: Unexpected error calling routine ConvertStringSidToSid.  hr = 0x80070539.

    Operation:
       OnIdentify event
       Gathering Writer Data

    Context:
       Execution Context: Shadow Copy Optimization Writer
       Writer Class Id: {4dc3bdd4-ab48-4d07-adb0-3bee2926fd7f}
       Writer Name: Shadow Copy Optimization Writer
       Writer Instance ID: {b7e9a7ca-db70-47c8-922b-0fa15f064b10}

    The other two are duplications of this error.
    Can anyone help me find a solution to this error?
    Here is a snapshot of the Application Events when the backup executes.

    Information    12/16/2008 2:27:58 AM    VSS    8224    None
    Information    12/16/2008 2:25:00 AM    Backup    754    None
    Error    12/16/2008 2:24:55 AM    VSS    8193    None
    Information    12/16/2008 2:00:23 AM    ESENT    103    General
    Information    12/16/2008 2:00:23 AM    ESENT    302    Logging/Recovery
    Information    12/16/2008 2:00:22 AM    ESENT    301    Logging/Recovery
    Information    12/16/2008 2:00:21 AM    ESENT    300    Logging/Recovery
    Information    12/16/2008 2:00:20 AM    ESENT    102    General
    Information    12/16/2008 2:00:17 AM    ESENT    2006    ShadowCopy
    Information    12/16/2008 2:00:17 AM    ESENT    2003    ShadowCopy
    Information    12/16/2008 2:00:17 AM    ESENT    2006    ShadowCopy
    Information    12/16/2008 2:00:17 AM    ESENT    2006    ShadowCopy
    Information    12/16/2008 2:00:17 AM    ESENT    2003    ShadowCopy
    Information    12/16/2008 2:00:17 AM    ESENT    2003    ShadowCopy
    Information    12/16/2008 2:00:15 AM    ESENT    2001    ShadowCopy
    Information    12/16/2008 2:00:15 AM    ESENT    2001    ShadowCopy
    Information    12/16/2008 2:00:15 AM    ESENT    2001    ShadowCopy
    Information    12/16/2008 2:00:15 AM    ESENT    2001    ShadowCopy
    Information    12/16/2008 2:00:15 AM    ESENT    2005    ShadowCopy
    Information    12/16/2008 2:00:15 AM    ESENT    2005    ShadowCopy
    Information    12/16/2008 2:00:15 AM    ESENT    2005    ShadowCopy
    Error    12/16/2008 2:00:11 AM    VSS    8193    None
    Error    12/16/2008 2:00:10 AM    VSS    8193    None
    Information    12/16/2008 2:00:00 AM    Backup    753    None

    DFS-R Backlog Appears Stuck


    I inherited 2 DFS-R hosts connected via a WAN. We'll call them Michigan and California. We have several replication groups, one of which is massive: in the realm of 1.4 million files and 1.6 terabytes after deduplication. About 2 weeks ago, all of a sudden we had a 1.4 million file backlog, and for whatever reason that backlog jumped back up to 1.4 million at least once since then. Fast forward to earlier this week: the backlog of files being sent from California to Michigan is ZERO. However, the backlog of files being sent from Michigan to California is seemingly stuck at right around 80,000 and growing (as users continue to make changes). For the life of me, I can't figure out what the holdup is. The DFSR logs are pretty much Greek to me. I've seen suggestions to disable membership of the backlogged node, wait for the changes to replicate in AD and get picked up by the member servers, then re-enable it to kick off an initial sync. The problem is, I'm under the impression that if I do that, I'll lose data that hasn't replicated from Michigan to California yet, since California would become the "master". Sure, I can run a preemptive backup, but there's still a chance that someone will change something while that 18-hour backup runs.

    Are there any ideas of what I can do.  I'd love to narrow this down to figure out what exactly is the hold up.  I'm at my wit's end with this thing.

    Long term, the plan is to replace this current DFS solution with something else or to at least prune it down and/or break it into smaller parts.  However, I'm stuck with what I've got for the moment.

    Any help would be GREATLY appreciated.
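    To at least confirm whether the backlog is draining or truly stuck, the built-in dfsrdiag tool reports it per direction; the replication group, folder, and member names below are placeholders:

```
# Sketch: report the Michigan -> California backlog. Run it a few minutes
# apart; if the same files stay at the head of the queue, replication is
# genuinely stuck (often staging quota or one huge file) rather than slow.
dfsrdiag backlog /rgname:"MassiveGroup" /rfname:"MassiveFolder" /smem:MICHIGAN-SRV /rmem:CALIFORNIA-SRV
```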

    Access to fileshare with specific IP


    Hi,

    I have the below request from a customer. Here is what has been done so far. Please help. I know there will be some confusion; let me know if more info is needed.

    There is a Windows 2003 cluster (a two-node cluster, "server1" and "server2") which has two file share cluster groups, "cluster group 1" and "cluster group 2", each with a single disk assigned to it, "disk1" and "disk2" respectively. There is a third, non-clustered server, "server 3" (Windows 2012 R2). This server has 3 drives: C:, D:, and E:.

    The data from "disk1" of "cluster group 1" has been copied to D: drive on third server using robocopy. Similarly data from "disk2" of "cluster group 2" has been copied to E: drive on third server.

    The customer wants to stop the above cluster groups, rename the third server to "server1", change its IP (using the IP of "cluster group 1"), and add an additional IP (the IP of "cluster group 2") on the same NIC on the third server. Now the customer wants to know if there is any possibility to restrict or bind these IP addresses to the D: and E: drives respectively. In other words, if a user accesses \\<first IP>, he or she should see only D: drive data; if a user accesses \\<second IP>, only E: drive data. This should, in other words, behave like the cluster share access used to.

    Is there any way to achieve this? Please help.

    -Umesh.S.K

    How to enable share folder password

    I created shared folders on the file server and gave users permission to access their own folders. I have multiple folders on the file server; when a user clicks their folder they have access, but only from their own PC. If they want to access the folder from another computer or another login, they don't have access. I want to make the shared folders password-protected so that any user can access their folder from any location using their ID and password. All my PCs are in Active Directory. Please help me solve this problem.

    seeding and the PreExisting folder


    Hello

    Can someone please clarify the following for me, as I have read different information on the same question and am therefore not sure of the correct answer.

    Windows 2003 R2 DFS

    I am going to add a new member to an existing DFS share (i.e. a new namespace server and a new replica member).

    Therefore I take an NT Backup of the data on the existing member, then restore this data to the relevant folder on what is to become the new member (i.e. pre-seeding), the idea being that not all the data has to be synced across the WAN.

    Now I have read the following two accounts of what happens when I add the new member

    1: Any preexisting data in the folder (i.e. the data I pre-seeded earlier) will be moved to the PreExisting folder (thereby leaving the shared folder temporarily empty).

    2: DFS will then compare the hashes of the files on the existing member to the hashes of the files in the PreExisting folder, and if they are the same it will move the files back to the normal DFS shared folder (i.e. where they were originally moved out from), possibly via the staging folder.

    The above makes sense to me (though I'm not sure whether the data goes back via the staging folder?).

    I have also read the following

    1b: As above

    2b: DFS never touches these files again (the files in the PreExisting folder), i.e. it leaves them in PreExisting and does not move them back. Rather, DFS will get a copy of all files from the existing member, as it is considered "authoritative" (i.e. the primary member) until the full initial sync has completed from the primary member to the new member (at which point the existing member is no longer authoritative for this particular relationship).

    If 2b above is correct, this would make a nonsense of the pre-seeding work.

    Can someone please tell me which (if any) of the above behaviors is correct when adding a new member with pre-seeded data from a current member?

    Thanks all

    JoB333



    DFSR 2008 Preexisting folder cleanup

    I have multiple sites that replicate P: drives, and the remote sites seem to have moved most files to the PreExisting folder, while the PreExistingManifest.xml has only about 20 files listed out of thousands. So I am going to have to do a manual cleanup. What I am having a problem with is deleting files from the preexisting folder, E:\System Volume Information\DFSR\Private\Guid\Preexisting. I have tried modifying security to Full Control and removing the Read Only attribute, but I am unable to clean up any of these files. Any help would be appreciated.

    DFSR: Is it ok to empty the PreExisting folders after initial sync is complete and data is verified?

     We have pre-seeded hundreds of GB of data on our hub server, and several GB of data ended up in the 'PreExisting' folder for each replicated folder.  Initial sync is complete for all of them and I want to verify it's ok to completely remove the contents of the Preexisting folder (leaving the Preexisting folder there, of course).  Also, I did try deleting one folder from it and explorer doesn't seem to respond.  Perhaps it's just a resource issue on a very busy DFSR hub server?  Is there a WMI method for pruning the Preexisting contents like there is for the conflictanddeleted folders?
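    For the ConflictAndDeleted folder there is a WMI method; I have not seen an equivalent documented for PreExisting, so the sketch below only shows the neighboring machinery (the replicated folder name is a placeholder), and whether anything similar exists for PreExisting is exactly the open question:

```
# Sketch: the documented WMI cleanup call for the ConflictAndDeleted folder.
Get-WmiObject -Namespace 'root\MicrosoftDFS' -Class DfsrReplicatedFolderInfo |
    Where-Object { $_.ReplicatedFolderName -eq 'MyReplicatedFolder' } |
    ForEach-Object { $_.CleanupConflictDirectory() }
```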

    Reading from HDD slower in system mode than in Administrator mode


    Hello,

    I have developed a server application that reads files (there are hundreds of files of 300 KB each) from the hard disk and sends them over the local network (1 Gb) to a client application that I have also developed.

    I noticed that if the server application runs in system mode (system session), which is the case that interests me, the sending is much slower than if it runs in administrator mode (administrator session).

    I have traced the delay to reading from the hard disk (without the cache; if the files are already cached, the speed is the same).

    I did the tests under Windows Server 2012 R2, and Windows Server 2012.

    Do you have any idea about this difference? Is there a difference between the system session and the administrator session concerning access to resources such as the HDD? And is there a solution to make the application as fast in system mode as in administrator mode?

    Thank you

    DFS - Adding Folder Target getting Access Denied

    Trying to add a folder target (not a namespace) to an existing DFS root, I'm getting 'Access Denied'. I want to be clear: I can access the folder on that server via \\... The problem, I believe, is that I have an existing folder target from a server that is long gone. Can that cause 'Access Denied'? I already set up DFS-R (replication) to that server with no problem. I can NOT delete the bad folder target or disable it; I get 'Access Denied'. I have other DFS roots going to that server with no problems (the other roots do not have the bad server listed).

    Large files count in DFS


    Could you please advise what may be wrong? I have a folder that is replicated between 4 servers connected by WAN links. A few days ago I spotted that replication had become very slow. I checked the backlog file count on every server and found that one of them has a very large file queue, around 500k files. On the other servers the backlog file count is around 100-500 files. The replicated folder contains around 600k files in total; how can 500k be in the queue?

    Thank you very much