Channel: File Services and Storage forum

Even though a file in the shared folder has been closed by the client, I can still see it open when I query it with openfiles.exe


Dear Experts

To be clearer, I would like to describe the problem step by step:

1 - A client (64-bit Win8.1) puts a PDF file on a mapped shared folder on a virtual Windows Server 2012 Standard.

2 - Another client (64-bit Win7, 32-bit Win7, or 32-bit Win8.1) opens the file and checks whether it belongs to him/her.

3 - He/she tries to delete the file.

4 - He/she cannot delete it.

5 - I am informed of the problem, check the status with openfiles.exe on the server, and can see that the file is still open.

6 - I confirm that the file has been closed by both the owner and the second user.

Here is the question: why are files that have been closed on the client side not also closed on the server side? How can I stop this from happening?
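
For reference, here is how I inspect and, when necessary, force-close the stale handles on the 2012 server (a minimal sketch; the file path is an example):

# List open handles for PDF files on the server's shares:
Get-SmbOpenFile | Where-Object { $_.Path -like '*.pdf' } |
    Select-Object ClientComputerName, ClientUserName, Path

# Force-close one lingering handle (any unsaved client changes are lost):
Get-SmbOpenFile | Where-Object { $_.Path -eq 'D:\Shares\Data\example.pdf' } |
    Close-SmbOpenFile -Force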

Thanking you in advance for your support.

Regards


Shadow Copy restore


Trying to restore shadow copies of files that have changed in error.

If, on the file server (Windows 2008 R2), you drill down to the file that needs to be restored, right-click it, and choose "Restore previous versions", no previous versions show up. However, if you right-click the parent directory and choose "Restore previous versions", you can open a previous version of the directory and copy/open the file from that window.

Is there any reason why you can't just select the file, without having to go via the parent directory?
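
In case it helps to confirm that snapshots actually exist for the volume, a quick check on the server (a sketch; drive letter D: is an example):

# List the shadow copies available for the volume that hosts the file:
vssadmin list shadows /for=D: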

Data Deduplication and File Backup


Hello all,

This is my first question here in the forum. I just migrated one of our servers from Server 2008 R2 to Server 2012 R2 Standard. I use FreeFileSync to back up a couple of network storage volumes that were attached to the Server 2008 machine and are now attached to the Server 2012 machine. I would like to know: if I apply Data Deduplication to the network storage now attached to the Server 2012, would that affect the files in my backups? How would that work? Thanks
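
For context, this is roughly how I would enable it (a sketch; the volume letter E: is an example):

# Install the feature, enable dedup on the volume, and run a first optimization pass:
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume 'E:' -UsageType Default
Start-DedupJob -Volume 'E:' -Type Optimization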

User Profile Disks and DFS replication

We're looking to replace a single, heavily used 2003 TS server--for performance, feature, and capacity reasons. Our users frequently have small amounts of data on their desktops/settings that we'd like to persist between sessions, and they tend to have long-running sessions (disconnecting and reconnecting while traveling, but not logging off--keeping apps open in the meantime).

The servers were originally purchased intending for 2008R2 SP1 remote desktops services, with lots of fast internal (RAID 5, 12x300GB 15K) storage.  We're considering the option of using 2012 (virtual sessions, not virtual desktops) so that we can scale out as we grow--and use the User Profile Disks.  

Ideally, we'd like to maximize the usefulness of the purchased servers (and their internal storage) and not have to purchase additional hardware for shared storage (an iSCSI/external array that can be clustered). We're wondering if it's possible to pair the User Profile Disks (UPD) with DFS Replication (possibly over a dedicated NIC). Then a user could log in to server X and connect to her local UPD (with the changes replicating to server Y's copy). If she disconnected/reconnected, the RD Connection Broker would connect her back to her existing session, and if she logged off and back on, she could connect to either server X or Y and it would all work. For maintenance, we'd be able to drainstop one server via the Connection Broker, perform the maintenance, let DFS catch up, and then do the same on the second server.

Would something like this be possible?  Or is it just asking for major problems?
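
To make the idea concrete, here is a sketch of the replication group we have in mind (server names and paths are hypothetical; assumes the DFSR cmdlets that come with 2012 R2):

New-DfsReplicationGroup -GroupName 'UPD-RG'
New-DfsReplicatedFolder -GroupName 'UPD-RG' -FolderName 'UPD'
Add-DfsrMember -GroupName 'UPD-RG' -ComputerName 'SERVERX','SERVERY'
Add-DfsrConnection -GroupName 'UPD-RG' -SourceComputerName 'SERVERX' -DestinationComputerName 'SERVERY'
Set-DfsrMembership -GroupName 'UPD-RG' -FolderName 'UPD' -ComputerName 'SERVERX' -ContentPath 'D:\UPD' -PrimaryMember $true
Set-DfsrMembership -GroupName 'UPD-RG' -FolderName 'UPD' -ComputerName 'SERVERY' -ContentPath 'D:\UPD'
# Caveat: DFSR only replicates a file once its handle closes, so an always-mounted
# UPD VHDX may never replicate while a session is active -- the crux of our question.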

2012R2 Storage Spaces - Enclosure redundancy


Hi,

We are currently testing redundancy with Storage Spaces and have run into a big problem.

Here is a description of our setup (I'll try to be as precise as possible):

Two HP DL360 Gen8 servers, each with 2x 10 GbE Ethernet cards and 2 LSI SAS adapters with 4 external SAS ports, each connected to 3 DataON JBOD enclosures via dual SAS paths (2 SAS cables per server going to 2 separate controllers on each enclosure).

The 2 10 GbE Ethernet cards are set up on separate networks (10.0.0.0/16 and 192.168.0.0/16).

The 10.0.0.0/16 network is part of the Windows domain and hosts the DNS servers.

The 192.168.0.0/16 network is independent and only accessible by the above servers (no DNS defined, no default gateway).

I installed failover clustering and built a new cluster with those two servers, making sure to untick the “add available storage” option in the wizard.

The cluster built successfully, so I proceeded to build the storage pool.

On one of those servers, I created a Storage Pool using all the disks from all 3 DataON enclosures (32x dual-port SAS HDDs and 12x dual-port SAS SSDs).

And on top of this Storage Pool, I created two virtual hard disks:

- One small 1 GB virtual hard disk for the quorum (non-tiered, enclosure awareness enabled, mirrored)

- One large 15 TB virtual hard disk for the data (tiered storage, enclosure awareness, write-back cache and mirrored)

As a reference, here are the PowerShell commands I used to create the storage pool and the virtual disks:

$pooldisks = Get-PhysicalDisk | Where-Object { $_.CanPool -eq $true }

New-StoragePool -StorageSubSystemFriendlyName *Spaces* -FriendlyName SP1 -PhysicalDisks $pooldisks

$tier_ssd = New-StorageTier -StoragePoolFriendlyName SP1 -FriendlyName SSD_TIER -MediaType SSD

$tier_hdd = New-StorageTier -StoragePoolFriendlyName SP1 -FriendlyName HDD_TIER -MediaType HDD

New-VirtualDisk -StoragePoolFriendlyName 'SP1' -FriendlyName 'VD1' -StorageTiers @($tier_ssd,$tier_hdd) -StorageTierSizes @(2212GB,13108GB) -ResiliencySettingName Mirror -NumberOfColumns 4 -WriteCacheSize 10GB -IsEnclosureAware $true

New-VirtualDisk -StoragePoolFriendlyName 'SP1' -FriendlyName 'Quorum' -Size 1GB -ResiliencySettingName Mirror -IsEnclosureAware $true

 

So far so good. I then added the storage pool to the cluster using Failover Cluster Manager, then added the two disks created above (after first creating a volume on each).

I then added the bigger disk to the Cluster Shared Volumes.

I added the second (smaller) disk as the quorum witness for the cluster.

In Failover Cluster Manager, I added the Scale-Out File Server role (using the name 999SAN01P001 as the distributed server name), and created a highly available share on the Cluster Shared Volume (now appearing under C:\ClusterStorage\Volume1\Shares\Hyper-V).

I can now access the share via \\999SAN01P001\Hyper-V without any problem and even run Virtual Machines on it.

Here is the problem:

If I eject a couple of disks from one of the enclosures, no problem; everything stays available.

If I however simulate an enclosure failure (by pulling the power), the Cluster Shared Volume becomes inaccessible!

The “Cluster Virtual Disk” status in the failover cluster manager shows as “NO ACCESS”.

The virtual disk in Server Manager (under File and Storage Services), although it shows as “Degraded”, is still accessible (not offline).

What am I doing wrong here?

With three enclosures, the system should be able to sustain the failure of a complete enclosure (and it does, as my virtual disks in Server Manager show online, but degraded), yet my cluster cannot access it anymore (the Cluster Shared Volume shows “No Access”).
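
For anyone reproducing this, here is the diagnostic sketch I run after pulling the enclosure's power (pool/disk names as above):

# Confirm enclosure awareness and health after the simulated failure:
Get-StorageEnclosure | Format-Table FriendlyName, HealthStatus -AutoSize
Get-VirtualDisk -FriendlyName 'VD1' | Format-List IsEnclosureAware, HealthStatus, OperationalStatus
Get-StoragePool -FriendlyName 'SP1' | Get-PhysicalDisk |
    Format-Table FriendlyName, EnclosureNumber, HealthStatus -AutoSize
# And how the cluster itself sees the CSV on each node:
Get-ClusterSharedVolumeState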

Thank you,

Stephane

iSCSI MPIO with only one LUN

If I have a Server 2012 MPIO iSCSI connection to the storage server but I'm only connecting to one LUN, will that provide any kind of increased performance, or is the benefit just redundancy at that point?
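
For what it's worth, a round-robin policy can still spread I/O for a single LUN across both paths; a sketch of how to check and set it (assuming the LUN is already claimed by MPIO):

mpclaim.exe -s -d                                     # list MPIO disks and their current load-balance policy
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR    # make round robin the default for newly claimed LUNs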

Can't extend NTFS volume


Hello,

I can't figure out why I can't extend an NTFS volume sitting on an iSCSI LUN.  The 'extend volume' option is greyed out, as shown in the following screenshot.

OS: Windows Server 2008 R2

Please help?
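
In case it is useful, the usual first steps on 2008 R2 are a rescan (so Windows sees the grown LUN) followed by the extend; note that a basic volume can only extend into unallocated space that sits immediately after it. A diskpart sketch (the volume number is an example):

# Build a diskpart script and run it from an elevated prompt:
@"
rescan
list volume
select volume 3
extend
"@ | Set-Content C:\Temp\extend.txt -Encoding ASCII
diskpart /s C:\Temp\extend.txt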


Work Folder System Tray executable error


We're currently looking to deploy Work Folders across our organization now that a Windows 7 client is available; however, we receive a .NET application error when installing the Work Folders client on a Win7 machine and configuring it via Group Policy.

Details on the config:

  • configured via GPO (user a member of an AD group)
  • folder encryption enabled
  • password policies disabled
  • Folder re-direction applied to move My Docs, Favorites and Links into the Work Folders folder

Upon login, or when accessing the Work Folders config app, an error appears on screen for WorkFoldersSystemTray.exe: "Work Folders has stopped working" and "Check online for a solution later and close the program". Work Folders continues to sync, however, and it does not appear to cause any fault at the actual application level.

The following event error is recorded in the application log:

Fault bucket 97957300, type 22

Event Name: CLR20r3

Response: Not available

Cab Id: 0

Problem signature:

P1: workfolderssystemtray.exe

P2: 6.3.9600.17021

P3: 5344cbf1

P4: mscorlib

P5: 2.0.0.0

P6: 5174de33

P7: 2d4f

P8: 2b

P9: System.UnauthorizedAccess

P10:

Attached files:

C:\Users\zak\AppData\Local\Temp\WER7498.tmp.WERInternalMetadata.xml

These files may be available here:

C:\Users\zak\AppData\Local\Microsoft\Windows\WER\ReportArchive\AppCrash_workfolderssyste_66c088aba79d83e91b9427c7767d96cfc09acd64_09ee7f52

Analysis symbol:

Rechecking for solution: 0

Report Id: c7633f34-539f-11e4-8531-028037ec0200


The obvious sign would be the "P9: System.UnauthorizedAccess" error; however, I don't know how this relates to the actual application. Would anyone be able to provide assistance?
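
Since P9 points at an access-denied exception inside the tray app, one hedged first step is to inspect the ACLs on the local sync root (a sketch; the default path is assumed, adjust if yours differs):

# Show the effective ACLs on the local Work Folders path and the redirected Documents folder:
icacls "$env:USERPROFILE\Work Folders"
icacls "$env:USERPROFILE\Work Folders\Documents"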


Customizing File Management Task with powershell action in Windows 2012 R2


Can anybody show me how to create a file management task with the custom action type, using PowerShell as the executable setting, in Windows 2012 R2 FSRM?

I searched lots of blogs and videos, but there were no PowerShell examples. I want a custom management task that moves files with the classification property "Confidential" to a new folder.

How can I do this? Please show me the PowerShell script and how to set the Arguments.

Hoping for your replies, and many thanks for your help.
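
A minimal sketch of one way this could look, assuming the FSRM cmdlets that ship with 2012 R2, a hypothetical helper script at C:\Scripts\Move-Confidential.ps1, a classification property named 'Confidentiality', and shares under D:\Shares. FSRM expands "[Source File Path]" to each matching file when the task runs:

# Weekly schedule, condition on the classification property, and a custom action
# that invokes powershell.exe with the matched file path as an argument:
$schedule  = New-FsrmScheduledTask -Time (Get-Date '03:00') -Weekly Sunday
$condition = New-FsrmFmjCondition -Property 'Confidentiality' -Condition Equal -Value 'Confidential'
$action    = New-FsrmFmjAction -Type Custom `
    -Command 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe' `
    -CommandParameters '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Move-Confidential.ps1 "[Source File Path]"' `
    -SecurityLevel LocalSystem
New-FsrmFileManagementJob -Name 'Move Confidential Files' -Namespace @('D:\Shares') `
    -Condition @($condition) -Action $action -Schedule $schedule

# Move-Confidential.ps1 itself can be as small as:
#   param([string]$Path)
#   Move-Item -LiteralPath $Path -Destination 'D:\ConfidentialArchive' -Force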


Seagate external hard disk RAW file system error


Have you ever had a RAW file system problem with your Seagate external hard disk? A couple of days ago I encountered this problem, and I am always asked to format the disk before I can access anything on it. However, last week I transferred many videos, files, and pictures of my favorite animated satires to this hard disk and had not found time to upload them to my online storage. I really don't want to search for and gather these files one by one again. Do you have any way for me to rescue them from this drive? Do I have to format the disk to fix the RAW problem? Any answer would be greatly appreciated!

Copying to one storage pool affects services on another


Hello everyone,

I have a Windows Server 2012 R2 machine that is used as a file server as well as an application server. When I am copying data to the file server shares (StoragePool A), it heavily impacts my application, which is located on a different storage pool (B) (up to 15 seconds of write latency).

My setup:

StoragePool A with 3 HDDs
- 2 disks with parity enabled
- 1 volume / disk using ReFS
- my file shares are on one of these volumes
StoragePool B with 1 SSD
- 1 Disk
- 1 Volume using ReFS

I do not see any limits on CPU or RAM (i3-4370 @ 3.8 GHz, 16 GB DDR3 RAM); neither is heavily utilized. In perfmon I can see pretty bad sec/transfer values for my file share (10-20 s) but not for my application disk (3 ms at most). I also noticed the copy process runs really fast for some seconds and then stops abruptly for a while; at the same time, RAM is being used up. I assume this is file system caching? I don't know if this could be a problem here, but is there a way to change this behaviour for one volume only? Overall my transfer rate is 20-30 MB/s for my file shares, which is pretty low by my standards, as I ran a different setup before with the same disks and got about 50-60 MB/s using ZFS on FreeBSD.

Where do I have to look to find the bottleneck here? Why does one storage pool affect another? Is there a way I can tweak the performance of my HDD pool?

I would be grateful for every hint!

Edit: Some additional information:

I just ran a disk benchmark for both the share storage pool and the storage pool my affected application is sitting on. As expected, the app SP was as fast as an SSD should be, but for the share SP I just have to post the following (since I am not allowed to embed images in my posts, I can only give you a link):
http://imgur.com/YE7nWNL
0.5/0.1 random read/write? That's really slow. Also, I found that during the testing process disk latency was much more normal than when copying the large number of files (this time I only had peaks of about 40 ms instead of 15 s!). Additionally, I noticed that no RAM was consumed during the benchmark, so I guess there was no file system caching this time. So it probably has something to do with FS caching. I hope this helps in finding a possible solution.
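
If anyone wants the raw numbers, this is roughly how I sample disk latency and cache usage while reproducing the copy (a sketch):

# Sample per-disk latency and system cache size every 2 seconds for a minute:
Get-Counter -Counter '\PhysicalDisk(*)\Avg. Disk sec/Transfer',
                     '\Memory\Cache Bytes',
                     '\Memory\Available MBytes' -SampleInterval 2 -MaxSamples 30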




Migration FRS ->DFSR (Sysvol) 2 questions.


Good day to all.

I apologize in advance for my bad English.

I have studied the SYSVOL Replication Migration Guide: FRS to DFS Replication and the SYSVOL Migration Series blog, and then successfully completed the FRS->DFSR migration, but I still have 2 questions:

1. There is a step "force AD replication" (repadmin /syncall /A /e /d), for example in the article about migrating to the "prepared" global state. But why is this step in the "Monitoring" stage? I mean, there is a point to forcing replication immediately after setting the global state, to accelerate the reception of the migration directive by the domain controllers. But why force replication after the migration to the specified state is already done? The same question applies to dfsrdiag /pollAD.
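
For reference, the exact commands in question, as I run them during monitoring (domain defaults assumed):

repadmin /syncall /A /e /d      # force AD replication of the migration directive
dfsrdiag pollad                 # make DFSR poll AD immediately instead of waiting
dfsrmig /getmigrationstate      # report which DCs have reached the current global state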

2. There is information that SYSVOL_DFSR will contain only the "domain" and "sysvol" folders, and this is true. Should I manually create the "staging" and "staging areas" folders with a junction point inside, or not? I do not really know what these folders are needed for.

Thanks.


Disks not available and cannot add new disks to existing pools


I have a few disks that are RAW and I want to add them to existing pools. However, only 1 disk shows up as a Primordial disk and the others do not. I cannot add these drives to a new volume, and when I try to add any drive to an existing pool, the "add disk" option is greyed out. I have searched everywhere to resolve this but had no luck.

What is interesting is that one of the drives that shows as available for a volume is the drive I have associated with backing up the OS on the server itself, and it is initialized, i.e. not RAW.
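
A diagnostic sketch that may narrow this down (run on the server; it makes no changes):

# List every disk with its pooling eligibility and the reason it cannot pool, if any:
Get-PhysicalDisk | Sort-Object FriendlyName |
    Format-Table FriendlyName, CanPool, CannotPoolReason, OperationalStatus, Size -AutoSize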

Network speed affected by large file copy operations. Also, why intermittent network outages?


Hi

I have a couple of issues on our company network.

The first is that a single large file copy impacts the entire network and dramatically reduces network speed. The second is that there are periodic outages where file open/close/save operations may appear to hang, and where programs that rely on network connectivity, e.g. email, also appear to hang. It is as though the PC loses its connection to the network, but the status of the network icon does not change. For the second issue, if we wait, the program will respond, but the wait can be up to 1 minute. The downside is that this affects Access databases on our server, so that when an 'outage' occurs the Access client cannot recover and hangs permanently.

We have a Windows Active Directory domain that comprises Windows 2003 R2 (soon to be decommissioned), Windows Server 2008 Standard and Windows Server 2012 R2 Standard domain controllers. There are two member servers: A file server running Windows 2008 Storage Server and a remote access server (which also runs WSUS) running Windows Server 2012 Standard. The clients comprise about 35 Win7 PC's and 1 Vista PC.

When I copy or move a large file from the 2008 Storage Server to my Win7 client, other staff experience massive slowdowns when accessing the network. Recently I was moving several files from the Storage Server to my local drive. The files came in pairs (e.g. folo76t5.pmm and folo76t5.pmi), one of which is less than 1 MB while the other varies between 1.5 and 1.9 GB. I was moving two files at a time, so the total file size for each operation was just under 2 GB.

While the file move operation was taking place, a colleague was trying to open a 36 KB Excel file. After waiting 3 minutes he asked me for help. I did some tests and noticed that when I was not copying large files he could open the Excel file immediately. When I started copying more data from the Storage Server to my local drive, it again took several minutes before his PC could open the Excel file.

I also noticed on my Win7 client that our email client (Pegasus Mail), which was the only application I had open at the time, would hang when the move operation started, and it would take at least a minute for it to start responding.

Ordinarily we work with many files.

Does anyone have any suggestions, please? This is something that is affecting all clients. I can't carry out file maintenance on large files during normal work hours if network speed is going to be so badly impacted.
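
In the meantime I am considering throttling the big copies; a sketch using robocopy's inter-packet gap (paths are examples, and the /IPG value needs tuning):

# /IPG:50 inserts a 50 ms gap between copy packets, capping the copy's effective
# bandwidth; /Z makes the copy restartable if the connection drops.
robocopy \\storageserver\share D:\LocalCopy folo76t5.pm* /IPG:50 /Z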

I'm still working on the intermittent network outages (the second issue), but if anyone has any suggestions about what may be causing this I would be grateful if you could share them.

Thanks

Branch Cache Deployment


Hi Techies,

I need to deploy a BranchCache solution for one of the client's remote offices. Users in the remote office need to access 3 file shares from two different locations (parent or main offices). Below are the complete setup details:

1. Branch Office:

a) Number of Users: 50

b) Local Servers: Yes (Domain Controller and File Server)

c) Client Operating System: Windows 7 Enterprise

d) Local File Server O.S. (for BranchCache): Windows Server 2012 R2 Standard x64

e) Clustered: NO

2. Main Office 1:

a) File Server O.S. (for Content Server): Windows Server 2008 R2 Standard x64

b) Clustered: NO

3. Main Office 2:

a) File Server O.S. (for Content Server): Windows Server 2008 R2 Enterprise x64

b) Clustered: YES

4. Network Bandwidth: The branch office is connected to the main offices through a 4 Mbps MPLS cloud link.

Based on the above environment, I need to deploy a BranchCache solution, and I have a few questions:

1. Can I have partner servers with different operating systems (the server on the remote site is 2012 R2 and the content servers in the main offices are 2008 R2, Standard and Enterprise respectively)?

2. Is the BranchCache service cluster-aware, since one of the above main office servers is clustered for file services?

3. I know the recommendation for 50 clients is the distributed cache model, but I do have a local file server in the remote location. Given this, which is the best and most effective solution?
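
For reference, a sketch of what the hosted cache variant would look like (the server name BRANCHFS01 is an example; the 2008 R2 content servers would additionally need the BranchCache feature and hash publication enabled via Group Policy):

# On the 2012 R2 file server in the branch (hosted cache server):
Install-WindowsFeature BranchCache
Enable-BCHostedServer -RegisterSCP   # registers a service connection point so clients can find it

# On the Windows 7 Enterprise clients (scriptable, or via the equivalent GPO):
netsh branchcache set service mode=HOSTEDCLIENT location=BRANCHFS01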

Regards,

Imran Khan


ReFS and Hardware RAID


Hi there,

Just wanting to confirm whether ReFS can be used on a hardware-controlled RAID 6 system, or must it be used in conjunction with Storage Spaces? I have 3 x 80 TB volumes across a bunch of devices, and I'm tossing up between moving to Linux and formatting with the EXT4 file system, or keeping ReFS if it will work on top of the RAID controller.

Thanks.

Zak.

DFS Namespace New Folder Issue


Quick overview of the current setup

2 Windows Server 2012 R2 with DFS and DFS Replication roles/features installed (File01 and File02)

Read and Followed TN Guide http://technet.microsoft.com/en-us/library/cc732863.aspx

Created Namespace on C:\DFSRoots\Shares, Share permissions: Everyone Full

Namespace: \\domain\shares; added File02 as a second namespace server, with File02's Override Referral set to "Last Among All Targets".

Here is the issue I am having and do not fully understand. I created a new folder under the root \shares and added a folder target on both servers. DFS creates the folder on the network share, but the folder has the shortcut icon on it.

I cannot open the folder locally or via the namespace. What gives?
Error: c:\dfsroots\shares\foldername is not accessible. The network location cannot be reached.

If I go to \\namespace\shares\ I see the folder name, with the shortcut icon on the folder again. I click on the folder and it keeps looping. What gives again?
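
For completeness, the checks I have run so far (a sketch; substitute the real namespace and folder names):

Get-DfsnFolder -Path '\\domain\shares\foldername' | Format-List
Get-DfsnFolderTarget -Path '\\domain\shares\foldername'   # verify both targets and their online state
dfsutil cache referral                                    # inspect the client's referral cache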

Any further insight into this DFS(R) issue is greatly appreciated, as I am confused: the TN article did not help, nor did a blog page with step-by-step information: http://mizitechinfo.wordpress.com/2013/08/21/step-by-step-deploy-dfs-in-windows-server-2012-r2/

Windows XP clients connect unstably and have trouble accessing data/documents on a 2003 file server while Windows 7 clients still connect normally


Dear IT Pros,

At the moment, I'm encountering an issue where Windows XP clients connect unstably and have trouble accessing data/documents on our 2003 file server, while Windows 7 clients still connect normally, so sometimes our production is delayed. Can anyone help me check and resolve the issue? Thanks in advance!

DFSR - Server 2008 - replication issue - $db_dirty$


If I navigate to the volume where my DFS folders are located - \System Volume Information\DFSR\database_xxx\ - I have a file called $db_dirty$. I found this article, but I am unable to remove the database_xxx folder: http://www.leversuch.co.uk/solved-dfs-error-the-replication-group-is-invalid/

What does this mean? When I stop the DFSR service it changes to db_clean, and when it is running it says db_dirty. Is this OK? The reason I ask is that I am having an issue with a particular replicated folder group. I have 4 replicated folder groups; 3 replicate flawlessly and 1 does not. I want to be sure the db_dirty has nothing to do with my issue.
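
For the problem group, the backlog check I have been running (a sketch; group, folder, and server names are examples):

dfsrdiag backlog /rgname:"ReplGroup1" /rfname:"Folder1" /smem:SERVER1 /rmem:SERVER2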

Any help is greatly appreciated.

Zach


Zach Smith

Server 2012 R2 File Server Stops Responding to SMB Connections


Hi There,

Massive shot in the dark here, but I am struggling with a pretty major issue at the moment. We have a production file server that is hosted on the following:

Dell MD 3220i -> iSCSI -> Server 2008 R2 Hyper-v Cluster -> Passthrough Disk -> Server 2012 R2 File Server VM

Essentially, 3 times now, roughly a month or so apart, the file server has stopped accepting connections. During this time, the server is perfectly accessible through RDP or with a simple ping. I can browse the files on the server directly, but no one appears to be able to access the shares over SMB. A reboot of the server fixes the issue.

As per a KB article, I removed NOD antivirus from the server after the second fault to rule out a conflicting filter driver. Sadly, yesterday it happened again.

The only relevant errors in the server's log files are:

SMB Server Event ID 551

SMB Session Authentication Failure Client Name: \\192.168.105.79 Client Address: 192.168.105.79:50774 User Name: HHS\H6-08$ Session ID: 0xFFFFFFFFFFFFFFFF Status: Insufficient server resources exist to complete the request. (0xC0000205) Guidance: You should expect this error when attempting to connect to shares using incorrect credentials. This error does not always indicate a problem with authorization, but mainly authentication. It is more common with non-Windows clients. This error can occur when using incorrect usernames and passwords with NTLM, mismatched LmCompatibility settings between client and server, duplicate Kerberos service principal names, incorrect Kerberos ticket-granting service tickets, or Guest accounts without Guest access enabled

and

SMB Server event ID 1020
File system operation has taken longer than expected.

Client Name: \\192.168.105.97
Client Address: 192.168.105.97:49571
User Name: HHS\12J.Champion
Session ID: 0x2C07B40004A5
Share Name: \\*\Subjects
File Name:
Command: 5
Duration (in milliseconds): 176784
Warning Threshold (in milliseconds): 120000

Guidance:

The underlying file system has taken too long to respond to an operation. This typically indicates a problem with the storage and not SMB.

I have checked the underlying disk/iSCSI/network/Hyper-V cluster for any other errors or issues, but as far as I can tell everything is fine.

Is it possible that something else was left over from the NOD antivirus installation?

Looking for suggestions on how to troubleshoot this further.
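
Next time it happens, I plan to capture the SMB server state before rebooting; a sketch of the snapshot I have in mind:

Get-SmbSession  | Measure-Object                 # how many sessions exist while connections are refused
Get-SmbOpenFile | Measure-Object                 # how many open handles the server is holding
Get-SmbServerConfiguration | Format-List Max*    # the configured SMB server limits
fltmc filters                                    # file system filter drivers still registered (AV leftovers?)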

Thanks

