Channel: File Services and Storage forum

Support for Work Folders Environment


Hi All,

We plan to build Work Folders on Windows Storage Server 2016 using HP StoreEasy hardware, deployed across two AD sites. The two Windows Storage Server 2016 machines will be configured as DFS servers with two-way replication of \\corpdomain.com\WorkFolders\ .

Users in site AD-1 will connect to the Windows Storage Server located in site AD-1, and users in site AD-2 will connect to the Windows Storage Server located in site AD-2. Work Folders will be installed on both servers.

Is this configuration supported for Work Folders? Can we use a single sync URL for all users, or should we build two separate URLs for Work Folders access?
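On the URL question specifically, here is a hedged sketch (the server name, user filter, and OU path are hypothetical): Work Folders clients can auto-discover via a single URL, and individual users can still be pinned to the sync server in their own site through the msDS-SyncServerUrl attribute on their AD user objects, assuming the ActiveDirectory module is available.

```powershell
# Sketch: keep one discovery URL for everyone, but pin site AD-2 users to the
# site 2 server. Server and OU names below are hypothetical examples.
Import-Module ActiveDirectory

Get-ADUser -Filter * -SearchBase 'OU=Site2,DC=corpdomain,DC=com' |
    Set-ADUser -Add @{ 'msDS-SyncServerUrl' = 'https://workfolders-site2.corpdomain.com' }
```

With the attribute populated, all clients can be configured with the same discovery URL and each user lands on the server for their own site.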


NTFS and Shared folder access


Hi!

We have a 2012 R2 file server with a shared folder containing some subfolders. One of these subfolders has inheritance disabled and its access restricted to an AD security group; a basic layout, so to speak. The security group has full access, as does the built-in "Administrators" group on the server.

Under Advanced Sharing on the share itself, "Everyone" has Read and Change permissions, and the previously mentioned "Administrators" group has Full Control.
As far as I'm aware, the more restrictive of the share and NTFS permissions should determine a user's access, which in this case is the NTFS security group. However, something else is interfering with the permissions.

The issue:
Some users, seemingly at random, can still access and read/write/change everything in this restricted folder, and I cannot for the life of me find a common denominator between these users.
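One way to hunt for the extra ACE is to dump the share-level and NTFS ACLs side by side and then check a puzzling user's token groups; a sketch with hypothetical share and path names:

```powershell
# Share-level permissions (run on the file server):
Get-SmbShareAccess -Name 'Share'

# NTFS permissions on the restricted subfolder, including inherited entries:
(Get-Acl 'D:\Shares\Share\Restricted').Access |
    Format-Table IdentityReference, FileSystemRights, AccessControlType, IsInherited

# In an affected user's session: list the groups actually in their token -
# nested group membership is a common culprit.
whoami /groups
```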

Formatting an HP 3Par SAN Storage disk presented to Windows 2012 R2 takes a long time to complete.


Dear Brothers,

I have observed that drives presented to my Windows 2012 R2 server from an HP 3Par SAN storage system take a very long time to format.

Windows Server 2012 R2 Server

- Non-clustered server - directly connected to HP 3Par SAN storage via Fibre Channel

HP 3PAR SAN Storage Presented Disk:

- Drive T: - 4 TB - label (3PAR_02) - still formatting after 4 hours and not done

- Drive H: - 4 TB - not yet formatted

- Drive I: - 4 TB - not yet formatted

- Drive J: - 4 TB - not yet formatted

Question:

I believe this is more a Windows 2012 R2 matter than an HP 3Par SAN storage matter.

Is there anything I am missing, such as patches or a procedure for this kind of setup?
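For what it's worth, a sketch of the usual suspect: a full format writes and verifies every sector of the 4 TB LUN, which can easily run for hours, while a quick format only lays down file-system metadata. The drive letter and label below are just examples.

```powershell
# Format-Volume performs a quick format by default; only pass -Full when you
# really want every sector of the LUN written and verified.
Format-Volume -DriveLetter H -FileSystem NTFS -NewFileSystemLabel '3PAR_03'

# Equivalent from the command line (/Q forces a quick format):
# format H: /FS:NTFS /V:3PAR_03 /Q
```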

Regards,


Windows Server 2016 file services issue - two users getting a read-write copy of the same document

We have a Windows 2016 file server with a DFS namespace.
Users have their shared drive "S" mapped to a share called \\dfsnamespace\shared.
Users currently have an issue with changes being lost in Word documents that are edited by multiple users as they collaborate.
The problem: if UserA has a Word document open and UserB opens the same document some time later, UserB is not prompted that UserA has it open and that it can only be opened read-only; instead, UserB also gets a read-write copy of the document. Now if UserA saves the document with his changes and UserB later saves his copy, UserA's changes are lost and UserB's are kept.

Have you seen this issue and what is the cause?

I ran "Get-SmbServerConfiguration" and confirmed that "EnableOplocks" is currently set to "true", with an "OplockBreakWait" setting of 35.
From these values I understand that if a Windows 10 client machine is off the network for 35 seconds, its oplock will be released.
From this I understand:
1. If UserA's machine goes to sleep, the lock is lost and UserB gets a read-write copy of the document, resulting in this issue. Is this right? (I tested this in another environment and observed the same behaviour there.)
2. If UserA switches from the wired to the wireless network, his lock on the server is lost because the network changed. Is this correct?
3. In scenario 2, if the user establishes the wireless connection within 35 seconds (before the old oplock from the wired connection is released), he has to wait out those 35 seconds to get a new oplock, and before that he can't save the document. Is this correct?

Also:
a) How is the oplock identified? Is it based on a hash of the client and server IP, or just the client name?
b) If this Word/Excel document is shared, will that make any difference to this behaviour?

I understand that SharePoint is the best collaboration application and would resolve these issues, but the client does not have SharePoint.
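As a starting point for troubleshooting, a sketch of the checks to run on the file server (the document name is a hypothetical example):

```powershell
# Confirm the oplock/lease settings mentioned above:
Get-SmbServerConfiguration | Select-Object EnableOplocks, OplockBreakWait

# See which clients currently hold the document open, and from which sessions:
Get-SmbOpenFile | Where-Object Path -like '*Budget.docx' |
    Select-Object ClientComputerName, ClientUserName, Path, SessionId
```

If a stale session from UserA's sleeping machine still shows up here while UserB edits, that would match scenario 1 above.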

SYSVOL Migration from FRS to DFSR in a Multi-Domain Forest


SYSVOL currently replicates via FRS, and the team is planning to migrate to DFSR.

I've done some research and reading on FRS-to-DFSR migration; however, it only covers a single-domain forest SYSVOL migration, run against the PDC emulator domain controller. There is no specific procedure or process for a multi-domain environment, which I assume means running it from the PDC emulator in every domain?

Questions:

a. Do I need to run this from each individual domain, and if yes, is there a recommended order?
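For reference, a sketch of the per-domain sequence with the built-in dfsrmig tool — the usual guidance is to run each state change against that domain's PDC emulator and wait until /getmigrationstate reports all DCs consistent before advancing:

```powershell
# Run per domain, on (or targeting) that domain's PDC emulator:
dfsrmig /setglobalstate 1    # Prepared
dfsrmig /getmigrationstate   # repeat until every DC has reached the state
dfsrmig /setglobalstate 2    # Redirected
dfsrmig /getmigrationstate
dfsrmig /setglobalstate 3    # Eliminated (FRS removed; no rollback after this)
```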


How can I get ownership?


Hi,

I have a Windows 2008 R2 AD file server with a lot of shared folders.

There is one folder, shown in the attachment.

I am logged in as a domain administrator.

I would like to remove the folder, but access is denied.

I then tried to take ownership as administrator, but that fails as well.

Please advise...
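A sketch of the usual recovery sequence, from an elevated prompt on the file server (the folder path is a hypothetical example):

```powershell
# Give ownership to the Administrators group, recursively, answering Yes to prompts:
takeown /F 'D:\Shares\StuckFolder' /A /R /D Y

# Now that Administrators own the tree, grant full control down it:
icacls 'D:\Shares\StuckFolder' /grant 'Administrators:F' /T

# Finally remove the folder:
Remove-Item 'D:\Shares\StuckFolder' -Recurse -Force
```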

Storage Spaces Direct in combination with file server for general use


I am looking for a big storage solution for our company. We produce videos and need around 500 TB per file share, and we need three file shares with the functionality of a file server for general use: quotas, permissions, etc. But with S2D enabled, the only possibility offered is SOFS; I don't see any drives for the failover role "File Server for general use". What is the way to go? Do I need to use SOFS, or is it not possible at all?

By the way, I like the bandwidth increase with SOFS, so if I can use SOFS I would be happy.

IO Implications of Using Higher Allocation Unit Sizes


It's a pretty common practice for SQL-backed file stores to be formatted with a 64 KB allocation unit size. SQL effectively has its own file system sitting on top of NTFS anyway, so it's one very large file.

But is using 64 KB as a standard for a regular file store a good idea?

I'm trying to understand specifically what goes on at the disk read/write level when picking an allocation unit size. Let's say I have a volume using a 64 KB allocation unit size, and an application that copies some files but also writes to log files frequently. Let's also say that, for diagnostic reasons, the logger opens the file (for append), writes the log message, flushes, then closes the file. It might do this many, many times during a session.

Given the above, is this a true statement: if the logger writes 10 bytes to the file, the entire 64 KB allocation unit is rewritten (so 64 KB of write IO occurred)? If the logger writes a 1000-byte message, again, 64 KB is actually written?

Or is it smarter than that, and only the specific number of bytes involved in the file IO operation is written?
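A small sketch that at least makes the allocation side of the question visible: it compares a tiny file's logical size with its rounded-up on-disk allocation, reading the cluster size from fsutil (an NTFS system drive, elevation, and English fsutil output are assumed).

```powershell
# Create a small file and compute how much disk space whole-cluster
# allocation costs it.
$path = Join-Path $env:TEMP 'alloc-test.log'
Set-Content -Path $path -Value '10 bytes'

$len = (Get-Item $path).Length

# Parse the 'Bytes Per Cluster' line from fsutil (requires elevation):
$line    = fsutil fsinfo ntfsinfo $env:SystemDrive | Select-String 'Bytes Per Cluster'
$cluster = [int](("$line" -split ':')[1].Trim())

# Allocation is rounded up to whole clusters, even for a tiny file:
$allocated = [math]::Ceiling($len / $cluster) * $cluster
"{0} bytes logical -> {1} bytes allocated ({2} KB clusters)" -f $len, $allocated, ($cluster / 1KB)
```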


S2D two node cluster - Network Design


Hello Guys,

I want to install a two-node S2D cluster, and I have 2x 25 Gbit RDMA-capable adapters in each server (plus more adapters for LAN traffic). The question is just about my RDMA traffic:

Usually, the documentation talks about creating a SET team out of the two 25 Gbit/s adapters and building two virtual adapters that act as two fault domains, for example:

vNIC: 'Ethernet (Storage1)' in VLAN 10 and 10.0.10.0 /24

vNIC: 'Ethernet (Storage2)' in VLAN 20 and 10.0.20.0 /24

The question is: Why should I create a team for the RDMA adapters at all? Wouldn't it be enough to just directly connect the two servers with two cables and configure a different network on each cable?

Thank you!

ente

S2D - Cannot remove storage pool


Hi!

I'd like to entirely reinstall a test installation of S2D. I tried the following:

Disable-ClusterS2D

-> pool still there

Remove-StoragePool

-> pool still there

Cleaned the disks with some scripts available from MS

-> disks cannot be cleaned and are no longer available; diskpart does not show the NVMe drives

-> reinstalled a server -> disks still not showing

-> purged the disks with a Linux distro

Disks are now available again, but the pool is still there, read-only.

-> Get-StoragePool -FriendlyName "S2D on MSLI01-C06" | Set-StoragePool -IsReadOnly $false
Set-StoragePool : Access denied

-> removed the storage pool in the console and then tried the above command again.

It seemingly worked - at least there was no error message.

-> Remove-StoragePool

The storage pool is still there, still read-only, and can be neither changed nor removed.

Any ideas on that?
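For comparison, a sketch of the teardown order that usually works, using the pool name from the post. This is destructive, so test clusters only:

```powershell
# 1. Tear down S2D first, while the cluster is still functional:
Disable-ClusterStorageSpacesDirect

# 2. Make the pool writable, drop its virtual disks, then the pool itself:
Get-StoragePool -FriendlyName 'S2D on MSLI01-C06' | Set-StoragePool -IsReadOnly $false
Get-StoragePool -FriendlyName 'S2D on MSLI01-C06' | Get-VirtualDisk |
    Remove-VirtualDisk -Confirm:$false
Remove-StoragePool -FriendlyName 'S2D on MSLI01-C06' -Confirm:$false

# 3. On each node, clear leftover pool metadata and refresh the provider cache
#    (Reset-PhysicalDisk will error harmlessly on the boot disk):
Get-PhysicalDisk -CanPool $false | Reset-PhysicalDisk
Update-StorageProviderCache -DiscoveryLevel Full
```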

Storage Spaces Dilemma - what to do to existing pool to increase performance


Dear All

I have the following dilemma.

I have a 24 TB pool running single parity on WSE 2016. It comprises circa 12 JBOD HDDs, and it has been doing its job for a couple of years; I migrated from WSE 2012 and before that from Home Server 2011 and Home Server.

I have been reading how adding two SSDs as journal drives enhances write speeds - which haven't been great, but acceptable. My server has 16 GB of RAM, and when transferring large files, say movies up to 12 GB, RAM was used as a write cache. Since I hadn't done anything with the server for years, I decided to entertain the idea of installing two additional SSDs and assigning them as dedicated journal drives.

I followed the example found on the DataOn blog, with some additional PowerShell commands to mark the disks as SSDs.

The odd thing started happening when I completed the process. When I started moving a large file (a 22 GB 4K movie) from my PC to the server, the speed was a full 1 Gbps until the copy got stuck at 44%. I checked from the server side and can confirm that RAM was being used as a cache, but not the SSDs. Worse, the copy never finished; it just timed out. I tried a couple of times, without success.

After extended research, I found that the maximum cache space used by an existing pool when adding an SSD is 1 GB, and you cannot increase the cache size for an existing pool. This matches what I see: both of my newly installed SSDs show 0.6% utilisation in the Storage Spaces GUI.

So, to the question: what do I do?

- Do I start detaching disks from my existing pool, create a new pool with the SSDs, and gradually move the data across?

- Do I forget about incorporating the SSDs at all - one could say, for home use, why bother?

- I would love to type some magic PowerShell commands that would see my SSDs utilised a bit more (100 GB is the limit I'd want for the cache).

I have had some bad experiences with moving data over, and when I think of at least a week or two of moving data, I just lose hope; I won't last worrying that long. Is there a more convenient way of addressing my issue?

Looking forward to any tip or advice that could see my server performing a bit faster and those new SSDs paying for themselves.
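On the "magic commands" wish: as far as I know the write-back cache size is fixed per virtual disk at creation time, so one hedged route is a new virtual disk in the same pool with an explicitly larger cache, then a gradual migration onto it. The names and sizes here are illustrative:

```powershell
# Create a new parity space allowed a 100 GB SSD write-back cache
# (requires enough journal/SSD capacity in the pool):
New-VirtualDisk -StoragePoolFriendlyName 'Pool' -FriendlyName 'Data2' `
    -ResiliencySettingName Parity -Size 10TB -ProvisioningType Thin `
    -WriteCacheSize 100GB
```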

Andrzej 

Work Folders issues


Hi all,

Since the most recent official Windows 10 update (the 1803 April update) I'm experiencing some issues.

* In Explorer, some pictures get thumbnails and some don't (as seen in the figure). I tried deleting the thumbnail cache, but that doesn't help.

* The Photos app no longer syncs pictures from the Work Folders, and no longer shows thumbnails for videos.

* In Explorer, the 'Download state' icons aren't correct. Some folders have a 'Failed sync' icon, but all the files inside are downloaded correctly.

A clean install of the client operating system doesn't help. Is anyone else experiencing these issues? Is a fix available soon?

Janjaap

The best way to secure sensitive data on a file server from AD admins


hi,

What is the best way to secure sensitive data on a file server from AD admins? "Trust does not exclude control."

PS: an AD admin can take ownership of any shared folder and then change the permissions.

thanks.


MCP - MCTS - MCSA - MCITP

Storage Replica - ReplicationStatus is WaitingForDestination


Hi, 

I just set up Storage Replica and left it syncing for a month. Simple setup: one AD server, one source server and one destination server.

Everything was fine until one day I turned the destination server off for a couple of hours.

I powered it back on and left it for a couple of hours; checking the status, it always says "ReplicationStatus : WaitingForDestination".

Storage Replica status: WaitingForDestination

Just wondering, is there any command or best practice to re-sync? Or do I have to remove the replication and redo it from scratch?
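A sketch of what to check before tearing the replication down; the group name is illustrative:

```powershell
# Inspect partnership and group state on the source server:
Get-SRPartnership
Get-SRGroup | Select-Object Name, ReplicationMode, ReplicationStatus

# If the destination never catches up, force a resync of the group:
Sync-SRGroup -Name 'ReplicationGroup01' -Force
```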

Thanks in advance

S2D 2019: Fault Domains without SES


I have a 2-node S2D 2019 cluster. I still have an issue where one particular node going down brings down a CSV, although the cluster itself stays up thanks to the cloud witness I have configured.

I'm on hardware that doesn't support SES (personal project), but S2D 2019 supposedly removed the requirement for SES. Does this mean that placement of slabs of data in a resilient fashion is now achievable? Is there a secondary requirement in the absence of SES to achieve this?

The failure reason for the volume was "the pack does not have a quorum of healthy disks". Interestingly, the ClusterPerformanceHistory CSV stays up. Perhaps I configured my other CSV incorrectly?

I'm not opposed to adding a 3rd node, though it's not ideal. Would this negate the issue?

Any help or advice is appreciated!


Clean up DFSR folder after replication group reconfiguration


Hi guys,

Our DFSR replication group(s) were reconfigured, and now almost 500 GB of files is left under the E:\System Volume Information\DFSR folder.

The System Volume Information folder is hidden, and I can only see it in the TreeSize Free software.

Please advise whether it is safe to delete these files, and which folders I should delete.

Regards

How many days are covered with Shadow Copy if there is a limit ?


I'm using Shadow Copy on a drive that contains a file share with about 1 TB of data.

I set the Shadow Copy limit to 60 GB and scheduled snapshots five times per day.

My questions are:

1. How can I know how many days (or how many snapshots) I'm covered for with the limit I set?

2. Is it possible to configure a separate drive where the Shadow Copy snapshots are stored? In that case I could add an additional drive for the snapshots and set a bigger limit there...
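Both questions can be explored with the built-in vssadmin tool. Drive letters are illustrative, and note that changing the storage association discards existing snapshots:

```powershell
# 1. Count the snapshots that currently fit in the 60 GB limit; divide by
#    5 snapshots/day to estimate the days of coverage:
vssadmin list shadows /for=D:
vssadmin list shadowstorage

# 2. Host the diff area on a separate drive with a bigger limit (an existing
#    association for D: must be deleted first, which removes its snapshots):
vssadmin add shadowstorage /for=D: /on=E: /maxsize=200GB
```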

Many thanks !


Don't allow users to change permissions


Hi,

I just wonder if somebody could help me. I've created a folder with access restricted to a particular group (this group has full access) inside a folder which everybody can open. So the UNC path looks like this: \\SERVER\FOLDER\RESTRICTED_FOLDER.

Now, if a user creates a folder inside RESTRICTED_FOLDER, he can change the permissions on it - I mean, add somebody - and this person, even with no rights to RESTRICTED_FOLDER at all, can access the new folder via the \\SERVER\FOLDER\RESTRICTED_FOLDER\NEW_FOLDER path. I was surprised when I discovered this, as I expected that without permission to RESTRICTED_FOLDER you couldn't access NEW_FOLDER at all - but you can.

Can anybody suggest a way to make sure users cannot grant permissions on subfolders, or another way to resolve this issue?
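A hedged sketch of the two levers usually involved (path and group names are hypothetical): the inherited CREATOR OWNER ACE hands creators Full Control - including "Change permissions" - on items they create, and even without it the creator still owns the new folder, which implicitly lets him edit its ACL; the well-known OWNER RIGHTS SID (S-1-3-4) can cap that.

```powershell
$path = 'D:\Shares\FOLDER\RESTRICTED_FOLDER'

# Drop the CREATOR OWNER template so creators stop inheriting Full Control:
icacls $path /remove 'CREATOR OWNER'

# Give the group Modify (no 'Change permissions') on the whole tree:
icacls $path /grant 'DOMAIN\RestrictedGroup:(OI)(CI)M'

# Cap what owners can do with objects they own (OWNER RIGHTS = *S-1-3-4):
icacls $path /grant '*S-1-3-4:(OI)(CI)RX'
```

Test on a copy of the folder first; the OWNER RIGHTS ACE in particular changes behaviour for every object owner under the tree.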

FSRM Quota Email Notifications


Hello,

I want to apply quota email notifications to some existing folders on Windows Server 2012 R2. The problem is that when I select more than one folder, the "Edit quota properties" option is not available. Is there some other way to edit quota notifications on multiple folders? I should note that the folders do not all have the same quota limit (some are 2, some 5, some 10 GB hard quotas...). The second part of my question: I previously set owners on the folders, and I would like the quota notifications set up so that when some percentage of a quota is exceeded, the owner of that folder also gets a notification. To be clear, I do not want to type the owner's address; I want to insert it as a variable.

Thanks in advance,

Marko
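The FSRM PowerShell module can do both parts in one pass: bulk-edit notifications without touching each folder's limit, and address the mail via an FSRM variable instead of a typed address. A sketch, assuming the Windows Server 2012 R2 FileServerResourceManager cmdlets; the [Source Io Owner Email] variable resolves to the account whose write pushed the quota over the threshold (often, but not always, the folder's owner), and the paths are illustrative.

```powershell
# Build one e-mail action and an 85% threshold that carries it:
$action = New-FsrmAction -Type Email `
    -MailTo '[Admin Email];[Source Io Owner Email]' `
    -Subject 'Quota warning on [Quota Path]' `
    -Body 'You have used [Quota Used Percent]% of the [Quota Limit MB] MB quota.'
$threshold = New-FsrmQuotaThreshold -Percentage 85 -Action $action

# Stamp the same notification onto every existing quota under the share,
# leaving each folder's own hard limit untouched:
Get-FsrmQuota | Where-Object Path -like 'D:\Shares\*' |
    Set-FsrmQuota -Threshold $threshold
```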


