Channel: File Services and Storage forum

Data Integrity Scan Doesn't Work


When I run the Data Integrity Scan manually within the Task Scheduler, it runs and completes immediately without doing anything even though I know there are corrupt files in the volume.  The following are the event log entries.  Notice that most of my files are skipped.  Why doesn't the scan attempt to fix corrupted files?

Integrity is both enabled and enforced for all of the several hundred files.  The ReFS storage space is a two-way mirror.  Both drives are attached via SATA and the motherboard BIOS SATA Configuration is AHCI.

Information:

Started checking data integrity.

 

Information:

Disk scan started on \\?\PhysicalDrive9 (\\?\Disk{ecb98218-784e-47d5-b316-941ae9595eb4})

 

Error:

Volume metadata scrub operation failed.

Volume name: I:

Metadata reference: 0x204

Range offset: 0x0

Range length (in bytes): 0x0

Bytes repaired: 0x0

Bytes not repaired: 0x3000

Status: The specified copy of the requested data could not be read.

 

Error:

Files were skipped during the volume scan.

Files skipped: 310

Volume name: I:\ (\??\Volume{53d99c4e-9ad6-11e8-8448-0cc47ad896dd}\)

First skipped file name: I:

HResult: The specified copy of the requested data could not be read.

 

Information:

Volume scan completed on I:\ (\??\Volume{53d99c4e-9ad6-11e8-8448-0cc47ad896dd}\)

Bytes repaired: 0x0

Bytes not repaired: 0x3000

HResult: The operation completed successfully.

 

Information:

Disk scan completed on \\?\PhysicalDrive9 (\\?\Disk{ecb98218-784e-47d5-b316-941ae9595eb4})

 

Information:

Completed data integrity check.
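The log entries show the scrubber failing to read one copy of the volume metadata and then skipping files, rather than repairing them. Two things worth checking from PowerShell are the per-file integrity settings and the health of the mirror space itself; if one copy is unreadable, a repair has to come from the remaining healthy copy. A rough sketch (the file path and the virtual disk name 'MirrorSpace' are placeholders, not from the post):

```powershell
# Confirm integrity streams are enabled AND enforced per file
Get-FileIntegrity -FileName 'I:\SomeFolder\SomeFile.dat'

# Find any files on I:\ where integrity is off or not enforced
Get-ChildItem 'I:\' -Recurse -File |
    ForEach-Object { Get-FileIntegrity -FileName $_.FullName } |
    Where-Object { -not $_.Enabled -or -not $_.Enforced }

# Check the two-way mirror's health and trigger a repair from the
# surviving copy, then watch the repair job
Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus
Get-VirtualDisk -FriendlyName 'MirrorSpace' | Repair-VirtualDisk
Get-StorageJob
```

If Get-VirtualDisk already reports the space as unhealthy, the scrubber skipping files would be a symptom of the underlying copy being unreadable rather than a scheduler problem.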






Bandwidth for single SMB operation


Hi!

I have two Windows Server 2016 machines with 10 Gb/s network adapters (with RSS support, but without RDMA). If I run 4-5 file operations in parallel (copying a few very large files), the total throughput is 8-9 Gb/s. If I run a single operation, it's about 2 Gb/s. Question: what limits the speed of a single operation? Where should I look, and what can I try to tune?
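One thing worth checking is whether SMB Multichannel is actually spreading the single copy across several TCP connections; without RDMA, a single SMB session on an RSS-capable NIC opens a limited number of TCP connections, and that per-interface connection count is tunable. A sketch of where to look (run on the client side; the value 8 is just an example to try, not a recommendation from the post):

```powershell
# Verify the client sees the NIC as RSS-capable and check which
# connections a running copy is actually using
Get-SmbClientNetworkInterface
Get-SmbMultichannelConnection

# Default is 4 TCP connections per RSS-capable interface; raising it
# can sometimes help a single large stream on a 10 GbE link
Set-SmbClientConfiguration -ConnectionCountPerRssNetworkInterface 8
```

If Get-SmbMultichannelConnection shows only one connection during a single-file copy, Multichannel is not engaging, and that alone would explain the 2 Gb/s ceiling.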

Thanks

Alex


WBR, Alex

Deduplication on Server 2016 only uses one core


Hello,

we use the deduplication feature on a machine running Server 2016 Standard. One of the new features of Server 2016 is multithreading support. To me this sounds like the deduplication process should speed up by using multiple cores...

We have now set up a test environment to check the performance increase (we use data deduplication for backups, and there are several TiB that need to be deduplicated every day), but there is no significant improvement. The fsdmhost.exe process runs with multiple threads, but in Resource Monitor all the threads seem to run on only one core: only one of the 8 cores shows any load. The I/O usage of the disk array is minimal, under 15%, so I/O performance does not seem to be the bottleneck.

I've tried adjusting parameters for the job and the volume with no success: for the job, -Memory 100, -InputOutputThrottleLevel high/low, -Cores 100, -Priority High; for the volume, -InputOutputScale from 0 through 10 and 24 up to 36.

What am I doing wrong?
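For reference, the parameters named above belong to Start-DedupJob and Set-DedupVolume; a minimal way to start a manual job with them and then confirm what actually got applied looks like this (the volume letter 'E:' is an assumption):

```powershell
# Kick off a manual optimization job with explicit resource limits
# (-Memory and -Cores are percentages of the machine's total)
Start-DedupJob -Volume 'E:' -Type Optimization -Memory 80 -Cores 80 -Priority High

# Confirm the running job's state and progress
Get-DedupJob | Select-Object Volume, Type, State, Progress

# Check the per-volume parallelism setting mentioned in the post
Get-DedupVolume -Volume 'E:' | Select-Object Volume, InputOutputScale
```

Comparing Get-DedupJob/Get-DedupVolume output against what you intended to set would at least rule out the parameters silently not taking effect.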

file and print sharing resource is online but isn't responding to connection attempts


Hello Experts,

I am facing an issue with a file share over the internet. I have a Server 2016 instance in AWS, and I created a few shared folders on that server.

I am able to access those shared folders over the public IP (\\PublicIP\Share) from Server 2016 instances hosted in AWS, Azure, and Google Cloud, but I am unable to access them from Windows 10. I have enclosed the Windows diagnostics log here.

While troubleshooting the issue, I tried the steps below, but with no luck. Please advise...

Allowed all traffics in AWS Security Group

Disabled Windows Firewall on Windows 2016 as well as Windows 10

Enabled SMB 1.0/CIFS Client on Windows 10

Tried to Telnet on port 445 but it was a failure

Disabled StrictNameChecking and SMB2Protocol on Server 2016 using the commands below:

Set-SmbServerConfiguration -EnableStrictNameChecking $False

Set-SmbServerConfiguration -EnableSMB2Protocol $False
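Since the telnet test to port 445 already failed, it can help to confirm from the Windows 10 client exactly where TCP 445 is being dropped; note that many consumer ISPs and client-side networks block outbound TCP 445 entirely, which would produce exactly this diagnostic. A quick check (the IP is a placeholder from the documentation range):

```powershell
# From the Windows 10 client: is TCP 445 reachable at all?
Test-NetConnection -ComputerName '203.0.113.10' -Port 445

# Compare with a port known to be open on the same server (e.g. RDP)
# to separate "445 blocked in transit" from "server unreachable"
Test-NetConnection -ComputerName '203.0.113.10' -Port 3389
```

If 3389 succeeds but 445 fails from the Windows 10 network, while 445 succeeds from the cloud-hosted servers, the block is almost certainly on the client's internet path, not on the AWS side.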

Log Name:      System
Source:        Microsoft-Windows-Diagnostics-Networking
Date:          10-08-2018 6.55.26 PM
Event ID:      4000
Task Category: Diagnosis Success
Level:         Information
Keywords:      (70368744177664),Core Events
User:          LOCAL SERVICE
Computer:      XXXXXXX
Description:
The Network Diagnostics Framework has completed the diagnosis phase of operation. The following repair option was offered: 

Helper Class Name: TransportConnection

Root Cause:  file and print sharing resource (IP Address) is online but isn't responding to connection attempts.

The remote computer isn’t responding to connections on port 445, possibly due to firewall or security policy settings, or because it might be temporarily unavailable. Windows couldn’t find any problems with the firewall on your computer.  

Root Cause Guid: {767897d8-7825-4413-ad95-d2ab2ca37281} 

Repair option: Contact the service provider or owner of the remote system for further assistance, or try again later 

RepairGuid: {36e90720-4fb8-4f74-a98f-f3ecce18873f} 

Seconds required for repair: 0 

Security context required for repair: 0

Interface:  ({00000000-0000-0000-0000-000000000000})
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Microsoft-Windows-Diagnostics-Networking" Guid="{36C23E18-0E66-11D9-BBEB-505054503030}" />
    <EventID>4000</EventID>
    <Version>1</Version>
    <Level>4</Level>
    <Task>4</Task>
    <Opcode>0</Opcode>
    <Keywords>0x4000400000000001</Keywords>
    <TimeCreated SystemTime="2018-08-10T13:25:26.607601500Z" />
    <EventRecordID>10620</EventRecordID>
    <Correlation ActivityID="{21855325-F05A-49E6-9D3B-593DCC7488C0}" />
    <Execution ProcessID="4380" ThreadID="9552" />
    <Channel>System</Channel>
    <Computer>xxxxxxx</Computer>
    <Security UserID="S-1-5-19" />
  </System>
  <EventData>
    <Data Name="RootCause"> file and print sharing resource (IP Address) is online but isn't responding to connection attempts.

The remote computer isn’t responding to connections on port 445, possibly due to firewall or security policy settings, or because it might be temporarily unavailable. Windows couldn’t find any problems with the firewall on your computer. </Data>
    <Data Name="RootCauseGUID">{767897D8-7825-4413-AD95-D2AB2CA37281}</Data>
    <Data Name="RepairOption">Contact the service provider or owner of the remote system for further assistance, or try again later</Data>
    <Data Name="RepairGUID">{36E90720-4FB8-4F74-A98F-F3ECCE18873F}</Data>
    <Data Name="SecondsRequired">0</Data>
    <Data Name="SIDTypeRequired">0</Data>
    <Data Name="HelperClassName">TransportConnection</Data>
    <Data Name="InterfaceDesc">
    </Data>
    <Data Name="InterfaceGUID">{00000000-0000-0000-0000-000000000000}</Data>
  </EventData>
</Event>


Thanks & Regards, Prosenjit Sen.

Windows 2016 Workfolders monitoring

Hi Team,

while troubleshooting Work Folders issues on Windows Server 2016, we noticed that the output of Get-SyncUserStatus, and also the user properties in Server Manager, differ from what is documented on the web for Windows Server 2012 R2.
In Windows Server 2012 R2, Get-SyncUserStatus lists more details, such as LastAttemptedSync, LastSuccessfulSync, etc.
see
https://blogs.technet.microsoft.com/filecab/2013/10/15/monitoring-windows-server-2012-r2-work-folders-deployments/
There are no such details on Windows Server 2016.

One of our current problems is that users are just getting "Problem while connecting to server" (sorry, translated from German),
and in

Microsoft-Windows-WorkFolders/Operational
Event ID 2100 "Error while connecting to server", error: (0x80072ee2)

That basically means a timeout.
It seems to be a temporary problem, and it also occurs when the machine has a perfectly good LAN connection to the server.
The server's SyncShare/Operational log records nothing at this time.
Is there another log on the server where we can see whether the request reached the server or not, such as the W3SVC log?
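For the server side, the per-user status cmdlet plus the SyncShare event channels are the main options; a sketch of what to query (the user, share name, and the Debug channel name are assumptions, so verify the channel names with `wevtutil el` first):

```powershell
# On the sync server: list shares and inspect one user's sync status
Get-SyncShare
Get-SyncUserStatus -User 'CONTOSO\jdoe' -SyncShare 'Share01'

# Enable the Work Folders server-side channels to catch whether the
# failing client request ever arrives (channel names assumed; check
# the exact names with: wevtutil el | findstr SyncShare)
wevtutil sl Microsoft-Windows-SyncShare/Operational /e:true
wevtutil sl Microsoft-Windows-SyncShare/Debug /e:true
```

If nothing appears server-side even with the debug channel enabled while the client logs 0x80072ee2, that points at the network path (proxy, reverse proxy, TLS termination) rather than the sync server itself.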

Thanks
Eckhard



Eckhard

Server to Server storage replication, Many to One


Hi,

I understand how you set up server-to-server storage replication, but can you do it in a many-to-one relationship?

For example, if I have 10 file servers at "Company A", and "Company A" has a DR site with loads of storage, can I set up one 2016 server at the DR site and get the file servers to replicate to that one server? Or is it a one-to-one relationship?

We're using DFS at the moment, but it's not working that well (3 support calls with Microsoft in 1 year, as it keeps failing).
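As far as I understand it (this is an assumption worth verifying against the Storage Replica documentation), each Storage Replica partnership is between one source volume pair and one destination volume pair, but a single DR server can be the destination of several independent partnerships as long as each source replicates to its own dedicated data and log volumes on that server. A sketch of one such partnership (all server, replication-group, and volume names are hypothetical):

```powershell
# One partnership per source file server; the DR box (DR01) hosts a
# separate destination data volume (E:) and log volume (M:) for FS01
New-SRPartnership `
    -SourceComputerName 'FS01' -SourceRGName 'RG-FS01' `
    -SourceVolumeName 'D:' -SourceLogVolumeName 'L:' `
    -DestinationComputerName 'DR01' -DestinationRGName 'RG-DR-FS01' `
    -DestinationVolumeName 'E:' -DestinationLogVolumeName 'M:'
```

Repeating this for each of the 10 file servers (with distinct destination volumes) would give a fan-in layout in practice, even though each individual partnership is still one-to-one.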

Cheers

Richard


Server 2008 SP2 Lost default display columns in file explorer e.g Name / Size / Type

This is the most obvious of various issues; I am just posting this in the hope that it reminds anyone of a similar issue. This server crashed overnight and had recently been struggling with a lack of virtual memory and any I/O. After restarting this morning I cannot see the default file attributes, just the last saved date. I cannot post an image as a newbie; maybe I can a bit later.

The folder pane can display the name, as can the 'Computer' window, along with any drives mapped to the server. However, the details pane will not show Name / Type / Size, and because these columns are defaults I cannot simply add them back. Unfortunately this is not the only issue: for example, internal processes/services are not running as expected. If I click the Windows Update icon in the task bar, the GUI flashes up on the screen then disappears. I have run sfc /scannow; the first time it failed (it just disappeared after reaching 100%), and the second time it reported that it had fixed some corruptions. After several restarts the issues remain, and the same issues are present in safe mode. Trawling through the web for similar issues, all I can find is the suggestion to re-register shdocvw.dll; however, since it's a live server that people can still access, I don't want to tempt fate until I fully understand the scope of that DLL.

Anyway, I am pretty resigned to the fact it's just in need of a rebuild/replace.  However if this means anything to anyone please let me know.  Many thanks.


FTP Permissions for users


Hi all,

I have one FTP server, and I have to give FTP permissions to one of our teams, which has 10 members. I have already installed and configured IIS on the Windows Server, which we are using to provide access to the FTP folders. I have given FTP folder access to all the members of that group, but now all members are facing one issue.

They are able to access the main FTP folder, but when they try to open a subfolder they get the error "You need permission to access this folder. Error 550". All members are using Windows 10 Enterprise.

Need help in resolving this. 
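FTP error 550 on subfolders (when the root works) is often an NTFS permission that was granted on "this folder only" instead of being inherited down the tree. One way to check and fix that with icacls (the path and group name are placeholders for your environment):

```powershell
# Inspect the effective NTFS ACL on a failing subfolder
icacls 'D:\ftproot\SubFolder'

# Grant the team group read/list with inheritance:
# (OI)(CI) = object-inherit + container-inherit, so the grant flows
# to files and subfolders; /t re-applies it down the existing tree
icacls 'D:\ftproot\SubFolder' /grant 'CONTOSO\FTP-Team:(OI)(CI)RX' /t
```

If the ACLs already look right, the other place to check is the IIS FTP authorization rules, which are evaluated separately from NTFS.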

Thanks...


Windows Server 2012 NFS server passwd sample


Does anyone have a sample passwd file they use for identity mapping?

I placed a passwd file in C:\Windows\System32\Drivers\etc with the following contents:

root:x:0:0:root:/root:/bin/bash

On reboot, Windows did not accept the file: in the event log under Applications and Services\Microsoft\Windows\ServicesForNFS-Server\IdentityMapping I get a warning that NFS file access is impaired:

Server for NFS is not configured for either Active Directory Lookup or User Name Mapping.

Without either Active Directory Lookup or User Name Mapping configured for the server, or Unmapped UNIX User Access configured on all shares, Server for NFS cannot grant file access to users.

Configure Server for NFS for either Active Directory Lookup or User Name Mapping using the Nfsadmin command-line tool, or Unmapped UNIX User Access using the Nfsshare command-line tool.
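For what it's worth, a stock UNIX entry such as `root:x:0:0:root:/root:/bin/bash` has no Windows account for Server for NFS to resolve; my understanding (an assumption worth verifying against the Server for NFS identity-mapping documentation) is that in Windows flat-file mapping the first field must be a Windows account name (DOMAIN\user or a local user), with the UID/GID fields carrying the UNIX identity to map it to. A hypothetical pair of files along those lines:

```
# C:\Windows\System32\drivers\etc\passwd  (format assumed, not verified)
CONTOSO\nfsroot:x:0:0:NFS root mapping:/:/bin/sh
CONTOSO\jdoe:x:1001:1001:Jane Doe:/home/jdoe:/bin/sh

# C:\Windows\System32\drivers\etc\group
CONTOSO\nfsgroup:x:0:CONTOSO\nfsroot
```

The warning text also suggests that simply placing the files is not enough: a mapping source (Active Directory Lookup or User Name Mapping) still has to be enabled via the Nfsadmin tool it mentions.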

NTFS permission removed when deleting a subfolder


Hello everyone,

I do have a very strange issue with my file server. Let me first describe the infrastructure.

OS: Windows Server 2016
Roles: File and Storage Services
Type: Member of a 2016 Domain

On the file server I have the following structure/permissions:

  • F:\
    • ANWDTest
      • ZZZ
        • DIMS
        • Wagenbuch

The NTFS permissions on those folders are as follows:

  • ANWDTest
    Inheritance disabled
    CREATOR OWNER - Full control - Subfolders and files only
    SYSTEM - Full control - This folder, subfolders and files
    Administrators - Full control - This folder, subfolders and files
    L_NTFS_J_R - Read & execute - This folder only

  • ZZZ
    Inheritance enabled
    L_NTFS_J_ZZZ_R - Read & execute - This folder only

  • DIMS
    Inheritance enabled
    L_NTFS_J_ZZZ_DIMS_R - Read & execute - This folder, subfolders and files
    L_NTFS_J_ZZZ_DIMS_W - Modify - This folder, subfolders and files

  • Wagenbuch
    Inheritance enabled
    L_NTFS_J_ZZZ_Wagenbuch_R - Read & execute - This folder, subfolders and files
    L_NTFS_J_ZZZ_Wagenbuch_W - Modify - This folder, subfolders and files

So far I think this is nothing special, now here is my issue:

When I delete the "Wagenbuch" or the "DIMS" folder, this removes the group "L_NTFS_J_ZZZ_R" from the "ZZZ" folder AND removes the group "L_NTFS_J_R" from the "ANWDTest" folder... and I have absolutely no idea why this is happening.

Does anyone see an error in the setup, or has anyone faced a similar issue? I am totally lost here, with no idea even where to start searching... Google did not help at all either.

Thanks for the support!


UPDATE 1: To be sure this is not an issue with our file server, I set up the same structure on another 2016 server and hit the same issue.

UPDATE 2: In the meantime I did the same setup on a 2012 R2 server, and there is no issue at all, so this seems to be specific to Server 2016.

Windows Server 2012 Storage Spaces Simple RAID 0 VERY SLOW reads, but fast writes with LSI 9207-8e SAS JBOD HBA Controller


Has anyone else seen Windows Server 2012 Storage Spaces with a Simple RAID 0 (also happens with Mirrored RAID 1 and Parity RAID 5) virtual disk exhibiting extremely slow read speed of 5Mb/sec, yet write performance is normal at 650Mb/sec in RAID 0?

Windows Server 2012 Standard

Intel i7 CPU and Motherboard

LSI 9207-8e 6Gb SAS JBOD Controller with latest firmware/BIOS and Windows driver.

(4) Hitachi 4TB 6Gb SATA Enterprise Hard Disk Drives HUS724040ALE640

(4) Hitachi 4TB 6Gb SATA Desktop Hard Disk Drives HDS724040ALE640

Hitachi drives are directly connected to LSI 9207-8e using a 2-meter SAS SFF-8088 to eSATA cable to six-inch eSATA/SATA adapter.

The Enterprise drives are on LSI's compatibility list.  The Desktop drives are not, but regardless, both drive models are affected by the problem.

Interestingly, this entire configuration but with two SIIG eSATA 2-Port adapters instead of the LSI 9207-8e, works perfectly with both reads and writes at 670Mb/sec.

I thought SAS was going to be a sure bet for expanding beyond the capacity of port limited eSATA adapters, but after a week of frustration and spending over $5,000.00 on drives, controllers and cabling, it's time to ask for help!

Any similar experiences or solutions?



Set-DfsnFolderTarget : The requested object could not be found.


I am getting an error when running Set-DfsnFolderTarget to change a folder target to use an FQDN:

Set-DfsnFolderTarget : The requested object could not be found.
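In my experience, Set-DfsnFolderTarget can only modify a target that already exists in the namespace under exactly the path you pass, so pointing it at an FQDN form that was never registered returns "The requested object could not be found". The usual workaround is to add the FQDN target and then remove the short-name one (namespace, folder, and server names below are hypothetical):

```powershell
# Add the FQDN form of the target...
New-DfsnFolderTarget -Path '\\contoso.com\files\data' `
    -TargetPath '\\fs01.contoso.com\data'

# ...then retire the NetBIOS-named target it replaces
Remove-DfsnFolderTarget -Path '\\contoso.com\files\data' `
    -TargetPath '\\fs01\data'
```

Get-DfsnFolderTarget on the folder path will show which target paths the namespace actually knows about, which is worth checking before either command.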

Unable to (Re)Add Disk to Storage Pool - Server 2012 R2 Essentials


Hi, in setting up and testing Storage Spaces on a brand new WSE 2012 R2 installation (all patches applied), I created a basic 2-disk Storage Pool with a fixed-provision Mirror Space (maximum size) with a standard NTFS volume.  I added some data to the volume. 

To simulate a drive failure, I then shut down the server and pulled one of the pool disks.  I took the removed disk to another system and reformatted it (deleting whatever Storage Pool data it had contained), then attempted to add it back to the system as a new disk. This fails.

I followed the steps exactly as outlined in the Microsoft Storage Spaces FAQ. The Storage Pool shows as Degraded, and the (old) disk shows as Retired.

However, I am unable to add the disk to the pool either through the client GUI or via Server Manager.  In Server Manager, the Add Physical Disk command is grayed out and unavailable, while in the client GUI I can select the disk, but the "Add Drives" process fails with the following error: The system can't find the file specified (0x00000002).

The new (old) disk shows up in Disk Management as a Basic disk with a GPT partition style, and I made sure it was Unallocated beforehand. Here is the output from Get-PhysicalDisk (where disk 1 is the "missing" disk):

FriendlyName        CanPool             OperationalStatus   HealthStatus        Usage                              Size
------------        -------             -----------------   ------------        -----                              ----
PhysicalDisk3       False               OK                  Healthy             Auto-Select                     3.64 TB
PhysicalDisk4       False               OK                  Healthy             Auto-Select                     3.64 TB
PhysicalDisk8       False               OK                  Healthy             Auto-Select                     1.82 TB
PhysicalDisk9       False               OK                  Healthy             Auto-Select                     1.82 TB
PhysicalDisk0       False               OK                  Healthy             Auto-Select                     3.64 TB
PhysicalDisk-1      False               Lost Communication  Warning             Retired                         3.64 TB
PhysicalDisk2       False               OK                  Healthy             Auto-Select                     3.64 TB
PhysicalDisk6       False               OK                  Healthy             Auto-Select                   119.24 GB
PhysicalDisk5       False               OK                  Healthy             Auto-Select                     3.64 TB

It appears to me that the new (old) disk is colliding with itself: as in the system won't let me add it to the pool because it already exists in the pool. However, I can't find a way to "force" the system to remove the retired disk altogether, so that I can add it back again as a "new" disk.

This experience suggests to me that Storage Spaces still seems fragile.  I realize this is an artificial scenario, but is there a way to recover from this other than deleting the pool and recreating from backup?
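One approach that may recover this without rebuilding the pool is to remove the retired/missing disk object from the pool explicitly, then add the reformatted disk back as new; a sketch using the names from the Get-PhysicalDisk output above (the pool name 'Pool1' and virtual disk name 'Mirror1' are assumptions):

```powershell
# Drop the ghost of the pulled disk from the pool...
$missing = Get-PhysicalDisk -FriendlyName 'PhysicalDisk-1'
Remove-PhysicalDisk -PhysicalDisks $missing -StoragePoolFriendlyName 'Pool1'

# ...then add the reformatted disk back as a new pool member
$new = Get-PhysicalDisk -CanPool $true
Add-PhysicalDisk -PhysicalDisks $new -StoragePoolFriendlyName 'Pool1'

# Finally, resync the mirror onto the re-added disk
Repair-VirtualDisk -FriendlyName 'Mirror1'
```

The GUI's "Add Drives" failing while the pool still tracks the retired disk is consistent with your collision theory; removing the stale physical-disk object first is what unblocks the add.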


How would we get Shadowcopy to send notification of status - e.g. successful shadowcopy - with some stats. Either email or event log alerts would do.


Hi,

How would we get Shadowcopy to send notification of status - e.g. successful shadowcopy - with some stats.  Either email or event log alerts would do. 

Any alerts or reports?
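As far as I know there is no built-in Shadow Copy success report, but a scheduled PowerShell script can pull per-volume stats from WMI and email them (the SMTP details are placeholders; Send-MailMessage needs a reachable relay):

```powershell
# Gather shadow copy counts and newest-copy timestamps per volume
$shadows = Get-CimInstance Win32_ShadowCopy
$body = $shadows | Group-Object VolumeName | ForEach-Object {
    $newest = ($_.Group | Sort-Object InstallDate -Descending)[0].InstallDate
    '{0}: {1} copies, newest {2}' -f $_.Name, $_.Count, $newest
} | Out-String

# Mail the summary (placeholder addresses and relay)
Send-MailMessage -To 'admin@contoso.com' -From 'vss@contoso.com' `
    -Subject 'Shadow Copy status' -Body $body -SmtpServer 'mail.contoso.com'
```

Registering that script as a scheduled task shortly after each configured snapshot time gives a crude "did it run, and how many copies exist" report; event-log alerting would have to key off VolSnap error events, since successes are not logged.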

thanks,

LUN to VHDX move with DFS Enabled.


Current Setup:

  1. 4 windows server 2012 R2 with IIS role.
  2. Running on Hyper-V 2012 R2 with failover clustering and live migration capabilities.
  3. DFS enabled on all web servers.
  4. Each server has a LUN attached to it with the size of 50 GB.
  5. A folder on each LUN created with the name of DFS, which is used by DFS obviously.
  6. IIS configuration file was moved from default location to a folder inside of the DFS folder. This allows the configuration to be shared among all web servers.

Current setup works great, so I am not troubleshooting anything; however, I would like to replace the LUNs with a VHDX file for each VM. In case you're wondering why I want to do this, the reason is that I am using Veeam to back up the servers, and the backup can only capture data stored on virtual disks, so data stored on those LUNs is not being backed up right now.

Of course, I could unnecessarily spend the money and get a windows host license for each VM and that will take care of the problem, but I would rather use the license that’s applied at the Hyper-V host level instead.

I am not entirely sure of the best way to do this, so I am seeking experts’ advice, as these web servers are very critical and run mission critical web applications and services.

I look forward to hearing from you all on the best way to do this.

Thanks in advance
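A common low-risk pattern for this kind of swap is to hot-add a VHDX to each VM, copy the DFS folder across inside the guest, and only then detach the LUN; a sketch of the host-side part (VM name and storage path are placeholders):

```powershell
# On the Hyper-V host: create a 50 GB dynamic data disk and hot-add
# it to the running VM on its SCSI controller
New-VHD -Path 'C:\ClusterStorage\Volume1\Web01-dfs.vhdx' `
    -SizeBytes 50GB -Dynamic
Add-VMHardDiskDrive -VMName 'Web01' -ControllerType SCSI `
    -Path 'C:\ClusterStorage\Volume1\Web01-dfs.vhdx'

# Inside the guest: bring the new disk online, format it, robocopy
# the DFS folder across, then retire the LUN once DFS is healthy
```

Doing one web server at a time, and letting DFS replication settle before moving the next, keeps the shared IIS configuration available throughout.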


HelpNeed


Quota Template for user's home folder


I have several quota templates configured for different users' home folders, so users can't exceed the limits of the quota template they belong to. Recently, I have found more and more users having space-usage issues after they reach their quota template limits.

For example, userA belongs to a 1 GB template for his folder. He ran out of space, so he deleted all his files, but the client side still shows 0 bytes free. Under the File Server Resource Manager console, I also still see userA using 100% of his space. To give userA his space back, I have to delete userA's quota and re-apply it. Can someone tell me why this happens? Thanks!
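This reads like FSRM's tracked usage going stale rather than the quota itself misbehaving; instead of deleting and re-applying the quota, you can force FSRM to rescan the folder and recalculate real usage (the path is a placeholder):

```powershell
# Server 2012 and later: recalculate actual usage for the quota path
Update-FsrmQuota -Path 'D:\Home\userA'

# On older FSRM versions the equivalent command-line tool is:
# dirquota quota scan /path:D:\Home\userA
```

If usage drifts out of sync regularly, it is worth checking whether anything bypasses FSRM's filter (e.g. files moved at the volume level or restored from backup directly into the quota path).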

 

Mapped Drive Showing Incorrect Storage Used

My organization uses Windows Server 2008 R2 for our file server. Each user has a mapped drive with 10 GB of storage space, which is mapped to a folder within the file server. One user has a mapped drive which is showing as almost full (188 KB of storage space remaining). However, when I check on the file server itself, the folder is only 1.5 GB. I have tried remapping the drive on the user's computer, but it still shows up as nearly full. When checking the folder with WinDirStat, it also shows as only 1.5 GB. The OS of the user's computer is Windows 10 Pro. I've looked around, but cannot find any solutions for this problem. Any help would be appreciated!

Is there a way to have more than one path to a specific file? Like an alias for a path?


The problem is this:

We had a user that about 6 months ago decided to rename 4 folders in a Data Directory used to store data for a database.  As this was done 6 months ago, many new records have since been created using the renamed folders in their document paths.  

However, any records created before she renamed those folders have the original folder name in the path for the documents that belong to those records. Needless to say, this is something that NEVER should have been done, but I have to deal with the current situation if I can find a way to fix things.

My first thought was to recreate the original folder names and put into them a copy of every document that exists in the renamed folders. This way, when the database is used, it won't come up with "document not found" errors for documents whose stored path differs by just a single renamed folder.

However:  There are thousands of documents that would have to be copied as these are huge folders.  And there is no easy way to sort out which were stored with the original folder names in their path and which were stored with the new name in their path.

I was wondering if there was a way to create virtual junction of sorts or a path "alias" where Windows would treat a path with both the old name and the new name in as if they were the same?  

Example  R:\Folder\A\filename.docx   =   R:\#1 Folder\A\filename.docx.   

With All the documents actually existing only in the new path.

The person who did this did exactly that when she created this problem: she renamed four of the already-in-use main data folders by adding #1, #2, #3, #4 to the front of those folder names.

No one even noticed, as no one had tried to look up any of the filed documents until now. And of course by now there are hundreds if not thousands of documents with their paths stored as R:\#1 Folder\A\filename.docx, as well as the original thousands with their paths stored as R:\Folder\A\filename.docx.

I would like to find a way (if one exists) to make Windows treat calls to either path as going to the folder as it is named now.

I have already contacted the database software company to see if we can mass edit the records in the database to change those paths as that would probably be the most logical solution but all of their software is proprietary.  Unless we can get them to do it, it is unlikely that an open-source tool is available to work on their database structure.

I am also open to any other ideas that anyone can offer. Keeping two copies of every document in the database just so they exist at the end of both paths would be extremely wasteful of space, but that may be our only alternative if we cannot find another way.
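An NTFS directory junction does exactly what the question asks: the old folder name becomes an alias that resolves to the renamed folder, with only one copy of the files on disk. Using the paths from the example above:

```powershell
# Recreate the old name as a junction pointing at the renamed folder;
# both R:\Folder\... and R:\#1 Folder\... then open the same files
New-Item -ItemType Junction -Path 'R:\Folder' -Target 'R:\#1 Folder'

# Equivalent from cmd.exe:
#   mklink /J "R:\Folder" "R:\#1 Folder"
```

The only caveat is that 'R:\Folder' must not already exist when the junction is created, and both names must live on the same volume for a junction (a symbolic directory link, mklink /D, would lift that restriction).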


DFS management does not work on second DC and file server


Hi,

We have DC1, DC2, and a file server, and they all have the DFS role installed.

DC2 is the FSMO roles master, and it is the only one from which I can access DFS Management.

From DC1 and the file server I get the error:

\\domain\namespace: Delegation information for the namespace cannot be queried.  The specified domain either does not exist or could not be contacted.

Is it normal that I can't access DFS Management from DC1 and the file server?

Is it true: http://www.itsupportforum.net/topic/the-namespace-cannot-be-queried-the-specified-domain-does-not-exist/

ssd for cache and ssd for performance tier in S2D cluster


Hi, I plan to build a two-node S2D cluster with 4x 800 GB 12 Gb SAS write-intensive SSDs for cache, 2x 960 GB SATA mixed-use SSDs for the performance tier, and 10x 2.4 TB 10k SAS drives for capacity. Is it possible to have such a scenario? Is it supported?

Thank you OOSOO

 