Channel: File Services and Storage forum

Workfolders Error - Access to the cloud file is denied 0x8007018b


Work Folders is great when it works, but clearly it is such a complicated configuration with so many dependencies that when things stop working, it's a game of Whac-A-Mole to get it working again.

The error messages aren't helpful, there are no troubleshooting articles, and there is no reference for the error codes.

I have googled this error message and there are only three results. What the heck am I supposed to do now?


Unable to copy and paste any file from one destination to another


I am using Windows 10. When I try to copy and paste my files from one place to another, the pasted file is not the file I selected but an older file that I copied four days ago. Kindly suggest what action I can take to resolve this problem.

AMAR MIHIR DASH


2008 r2 File permissions/security takes forever

Is there a way to make permission changes behave the way they did in 2003, or to tweak 2008? If I make a top-level change to a shared folder that contains millions of files, it processes every file one by one to remove a security group.
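For whatever it's worth, one common workaround for large trees is to make the change from the command line on the server itself rather than through the Explorer security dialog; it still has to touch every file, but it avoids the GUI overhead. A minimal sketch, with the share path and group name as placeholders:

# Hypothetical example: remove a group's granted rights from a large tree,
# run from an elevated prompt on the file server (path and group are placeholders).
# /T = recurse into subfolders, /C = continue on errors, /Q = suppress per-file output.
icacls "D:\Shares\Data" /remove:g "CONTOSO\OldGroup" /T /C /Q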

DFSR is replicating files that haven't changed, causing large backups.


I've got a couple of new Windows 2016 Servers in a DFS partnership with data migrated from another server. One server is "primary" and has priority, while the second server is for backup purposes if the primary goes down. The data was pre-seeded to each server using online documentation with robocopy and then put in a partnership (one at a time) with the original server until DFS stabilized and replication was working between all 3 servers. The servers hold 4TB of data with over 1 million files spread across several different replication groups.

I'm not sure if it is related, but during initial replication each new target server reported a ton of "Conflicted" files and retransferred them. I wasn't able to figure out why (the file hashes matched completely, and all the file properties looked the same), and I had followed the directions to the letter. So I let it complete on its own over several days and sort itself out.

Now, I've added a file-based, incremental forever, backup (Unitrends) to the primary server. It monitors the USN journal for file changes that need to be backed up. Those backups are inconsistent in size, because some kind of replication issue continues to cause tons of updates to the USN journal. The DFS servers appear to be "re-syncing" a large number of unchanged files. In addition, they will generate a large number of "Conflicted" files again. These are all files that are static and haven't been changed in years.

Some of the DFS debug logs were truncated so I wasn't able to go back as far as I would like. But, what I have seen is that before some of the really big incremental backups, the primary DFS server was shut down improperly and went in to DFS database consistency checks. This is what I have been able to piece together:

  • During database consistency checks, the primary DFS server requested a large amount of update info from the backup server, and a large number of updates were sent. It doesn't appear any data was transferred, but some type of file info was sent. I see log entries like this on the backup server:
20190205 12:19:16.666 7108 INCO  3364 InConnection::ReceiveVvUp Received VvUp connId:{02FDEEF7-6BF4-4843-BEAD-63913F05AF1C} csId:{9CC6A255-6567-4827-B69A-0FEAAC73604F} csName:Shared vvUp:{A46E672B-EAB9-4B12-AEF4-20C83853EE1A} |-> { 1880210..1880215,  1880218, 1880219, +	 1880221..1880223,  1880230,  1880300,  2095605,  2095804,  2099486, +	 2311562..2311928,  2313768, 2313769,  2313771,  2313778,  2313787, +	 2313789,  2313791,  2313797,  2313800,  2313805,  2313807,  2313810, +	 2313814,  2313818,  2313821,  2313836, 2313837,  2313841,  2313843, +	 2313845,  2313847,  2313855,  2313858,  2313862,  2313865,  2313872..2313874, 
20190205 12:19:19.494 4972 JOIN  1201 Join::SubmitUpdate LDB Updating ID Record:
20190205 12:19:19.510 4972 JOIN  1253 Join::SubmitUpdate Sent: uid:{47A2D072-9BC8-41C8-8F41-A66DD8BD22E9}-v5699713 gvsn:{47A2D072-9BC8-41C8-8F41-A66DD8BD22E9}-v5699713 name:removedforprivacy.PDF connId:{CD5C48CF-2662-499E-BA31-2DBFC77A1BF7} csId:{9CC6A255-6567-4827-B69A-0FEAAC73604F} csName:Shared
  • During this consistency check, the next incremental backup increased in size to 375GB. 
  • After the successful consistency check (Event 2214, 2002), the backup server began sending a ton of files to the primary server. The primary server began generating a large number of "Conflicted" files that absolutely haven't been changed in years.
  • The following incremental backup was 2TB.

Prior to this, I also had a 2.1TB incremental backup, even though no servers were shut down improperly and no significant events occurred in the event logs between the two backups (the DFS debug logs were truncated). The only notable thing is that this was the backup immediately following the initial full backup (4TB), and that initial full backup took approximately 8 days to complete.

Why are these significant syncing events occurring that are generating large USN change logs?

Why are files that have not changed being re-synced / updated, and why are they being marked as "Conflicted" on the receiving end? What about the initial sync and all the "Conflicted" files - is it related?

Is there some kind of check I can run to see if the two servers think their files are consistent and if they actually are - without triggering an entire resync? Is it possible the servers think they are in sync, but they are not actually - and how to determine that?
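On the last question, a minimal sketch of one way to spot-check files without triggering a resync, assuming the DFSR PowerShell module that ships with Windows Server 2012 R2 and later; the server names and path are placeholders, and the group/folder name is taken loosely from the logs above. Get-DfsrFileHash computes the same kind of hash DFSR uses (data, ACLs, attributes), so matching output on both members means DFSR should consider the file identical.

# Hypothetical spot check (placeholder names): compare the DFSR file hash for the
# same file on both members.
$path = 'D:\Shared\SomeStaticFolder\SomeStaticFile.pdf'
Invoke-Command -ComputerName 'PRIMARY','BACKUP' -ScriptBlock {
    param($p)
    Get-DfsrFileHash -Path $p
} -ArgumentList $path

# And count what is still waiting to replicate in each direction:
Get-DfsrBacklog -GroupName 'Shared' -FolderName 'Shared' `
    -SourceComputerName 'PRIMARY' -DestinationComputerName 'BACKUP' | Measure-Object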

how do i eliminate edb files


How do I eliminate edb files in Windows 10?

DFS-R on 2012 R2 using the MinimumFileStagingSize switch


I have a DFS-R replicated folder configured between 4 servers. The folder being replicated will only contain a handful of files (maybe 12 max). These files are images in VHDX format that we use for our Citrix environment, and they change only every few weeks. I would like to speed up replication when a VHDX file is modified.

Looking through the DFS-R cmdlets, it seems like disabling RDC and setting MinimumFileStagingSize to either Size64GB (18) or Size128GB (19) would allow the VHDX file to skip the staging process and begin replicating immediately.

I'm pretty sure that with a single large VHDX file, RDC and staging aren't really helping me at all, since it is one large file.

Am I correct in my assumption, or am I missing what this switch really does? Any help is appreciated.
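If it helps, a minimal sketch of what I believe those two changes would look like in PowerShell, assuming the in-box DFSR module; group, folder and server names are placeholders. One trade-off worth noting: RDC only transfers changed blocks, so if the VHDX is usually modified in place rather than replaced wholesale, disabling RDC trades lower staging/CPU overhead for full-file transfers over the wire.

# Hypothetical sketch (placeholder names): let files below 128 GB skip staging,
# and turn RDC off on a specific connection. Repeat Set-DfsrMembership for each member.
Set-DfsrMembership -GroupName 'CitrixImages' -FolderName 'Images' `
    -ComputerName 'SRV1' -MinimumFileStagingSize Size128GB

Set-DfsrConnection -GroupName 'CitrixImages' `
    -SourceComputerName 'SRV1' -DestinationComputerName 'SRV2' `
    -DisableRDC $true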

Server 2016 Task Scheduler not working?


I'm trying to use Task Scheduler in Server 2016 to upload files to a vendor's AWS S3 bucket. Nothing has worked using Task Scheduler.

If I run the .bat script on its own, it works just fine and the files are uploaded, but not in Task Scheduler.

If I run the same commands in a PowerShell script on its own, it runs just fine and the files are uploaded, but not in Task Scheduler.

The Task Scheduler history log says "Task completed," as you can see from the image I attached.

Settings I have in Task Scheduler:

  • Run whether user is logged on or not
  • Run with highest privileges
  • Configure for Windows Server 2016
  • Repeat task every 5 minutes for a duration of Indefinitely
  • Enabled
  • Action: Run a Program
  • Program/script: "C:\scripts\name_of_file.bat"
  • Start in: c:\scripts

Does Server 2016 have some kind of safety feature that I need to enable for this to work in Task Scheduler?

I am a Domain Admin.
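Two things that commonly explain "works on its own, silent no-op under Task Scheduler": the GUI's Start in field must not contain quotes, and anything the script relies on per user (for example AWS CLI credentials under the profile of whoever tested it interactively) has to exist for the account the task runs as. Below is a minimal sketch of registering roughly the same task from PowerShell with the working directory and run-as account made explicit; the task name, account and password are placeholders.

# Hypothetical sketch: same task, but with an explicit working directory and a
# named run-as account (placeholders). Storing credentials via -User/-Password
# is the equivalent of "Run whether user is logged on or not".
$action  = New-ScheduledTaskAction -Execute 'C:\scripts\name_of_file.bat' `
           -WorkingDirectory 'C:\scripts'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
           -RepetitionInterval (New-TimeSpan -Minutes 5)
Register-ScheduledTask -TaskName 'S3Upload' -Action $action -Trigger $trigger `
    -User 'DOMAIN\svc-s3upload' -Password 'PlaceholderPassword' -RunLevel Highest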


Storage Spaces & NVMe performance issues - Windows 2019


I have a test environment which originally had the following configuration:

  • 2 x HP DL380 Gen 10 servers running Windows 2019
  • 4 x 1.4 TB SSD - cache
  • 8 x 1.6 TB HDD - capacity

With Storage Spaces configured and using VMFleet as my benchmarking tool, building 20 VMs on each host, I was able to achieve the following results on a 4K 100% read test (all data in cache):

We wanted to see if the IOPS could be pushed higher, so we upgraded the servers to the following:

  • 2 x 1.4 TB NVMe
  • 4 x 1.4 TB SSD
  • 8 x 1.6 TB HDD

Building the same VMFleet configuration, but specifying NVMe as the cache and SSD as the performance tier, the same stress test produces similar IOPS.

I have destroyed and rebuilt the configuration several times but am still seeing the same results, which leaves me unsure whether I have a config issue or something else.

Firmware is as up to date as it can be on the physical servers (still waiting on a lot of 2019 drivers).

Any pointers as to where I should look to improve this are gratefully received.
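Not an answer, but one hedged place to start is confirming how the rebuilt pool actually classified the disks, since if the NVMe devices were not claimed as cache the test would be expected to look much the same as before. A minimal sketch using the in-box Storage cmdlets; nothing here is specific to VMFleet.

# How was each physical disk claimed? In a cache + capacity layout the NVMe
# devices would be expected to show Usage = Journal (cache) and the SSD/HDD
# devices Usage = Auto-Select.
Get-PhysicalDisk | Sort-Object MediaType |
    Format-Table FriendlyName, MediaType, BusType, Usage, Size -AutoSize

# And how the storage tiers ended up:
Get-StorageTier | Format-Table FriendlyName, MediaType, Size -AutoSize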


DFS Server


We have moved one server from one site to another, but now DFS has stopped working.

Can you please give me some tips for troubleshooting it?


File Share rights change automatically


Hello,

I have an Excel file on an AD-based share used by a department of 20 users. I set the access rights on that particular file to read/write for 3 users (one of whom is a Mac user) and read-only for the other 17 users. When the Mac user made changes to that Excel file, all the other users' access rights changed automatically to read/write.
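One hedged thing to check: many Mac and Office clients save by writing a new temporary file and swapping it into place, so the "changed" rights may simply be the parent folder's inherited ACL on a freshly created file rather than anything rewriting permissions. A minimal sketch to compare the file's ACL with its parent folder's; the path is a placeholder.

# Hypothetical check (placeholder path): if every entry on the file now shows
# IsInherited = True and matches the folder, the file was probably recreated on save.
$file = '\\fileserver\Dept\Budget.xlsx'
(Get-Acl $file).Access |
    Format-Table IdentityReference, FileSystemRights, AccessControlType, IsInherited -AutoSize
(Get-Acl (Split-Path $file)).Access |
    Format-Table IdentityReference, FileSystemRights, AccessControlType, IsInherited -AutoSize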

please advise.

Regards

Thevan Shanmugam    

DFS replication is only one way


Hi,

I have set up DFS replication between two servers, but replication is working one way only. Neither member is read-only. On the member server from which file changes are not replicated to the other one, I see the following in the debug logs. There is nothing wrong in the event log.

ReserveFileTransfer: Failed to reserve file transfer server context, no more connection contexts available.

Any comment is appreciated, as this DFS-R thing never seems to work.

20190211 11:30:26.943 7360 RPCN  1900 [WARN] NetworkGlobals::ReserveFileTransfer Failed to reserve file transfer server context, no more connection contexts available. updateName:IMG_3357.JPG uid:{F4616D18-1717-4C88-B95D-9EBA85BEE79C}-v1402653 gvsn:{F4616D18-1717-4C88-B95D-9EBA85BEE79C}-v1402653
 connId:{F0A1E960-13E1-431A-B5B9-0B2D7D0BF948} csId:{36CCC4E2-8E59-4CC7-9813-E08F80EBF8A1} totalContexts:64 totalConnectionContexts:64
20190211 11:30:26.943 7360 SRTR  3013 [WARN] InitializeFileTransferAsyncState::ProcessIoCompletion Failed to initialize a file transfer. connId:{F0A1E960-13E1-431A-B5B9-0B2D7D0BF948} rdc:1 uid:{F4616D18-1717-4C88-B95D-9EBA85BEE79C}-v1402653 gsvn:{F4616D18-1717-4C88-B95D-9EBA85BEE79C}-v1402653
 completion:0 ptr:0000027E4ED90800 Error:[Error:9078(0x2376) InitializeFileTransferAsyncState::ProcessIoCompletion servertransport.cpp:2886 7360 C All server file transfer contexts are currently busy]

20190211 11:30:26.951 7360 RPCN  1900 [WARN] NetworkGlobals::ReserveFileTransfer Failed to reserve file transfer server context, no more connection contexts available. updateName:IMG_3365.JPG uid:{F4616D18-1717-4C88-B95D-9EBA85BEE79C}-v1402682 gvsn:{F4616D18-1717-4C88-B95D-9EBA85BEE79C}-v1402682
 connId:{F0A1E960-13E1-431A-B5B9-0B2D7D0BF948} csId:{36CCC4E2-8E59-4CC7-9813-E08F80EBF8A1} totalContexts:64 totalConnectionContexts:64
20190211 11:30:26.951 7360 SRTR  3013 [WARN] InitializeFileTransferAsyncState::ProcessIoCompletion Failed to initialize a file transfer. connId:{F0A1E960-13E1-431A-B5B9-0B2D7D0BF948} rdc:1 uid:{F4616D18-1717-4C88-B95D-9EBA85BEE79C}-v1402682 gsvn:{F4616D18-1717-4C88-B95D-9EBA85BEE79C}-v1402682
 completion:0 ptr:0000027E4ED90920 Error:[Error:9078(0x2376) InitializeFileTransferAsyncState::ProcessIoCompletion servertransport.cpp:2886 7360 C All server file transfer contexts are currently busy]




File Locking PDFs while in use?

We have an AD and file server running Windows 2008 R2. With file sharing, I am having some problems with shared PDF files. Is there any way to lock the files when they are opened by a user, so that no one else can make changes to them while they are open? We are experiencing problems where more than one user has the PDF open, they make changes and then save over what the other user did, or Acrobat X locks up completely and loses everything they had done.

Fileserver - Diskmanagement and Partition - best practice


Hello all,

Right now we are running a file server (Windows Server 2016) with a really big data disk (D:\) of roughly 6.5 TB. This server is virtualized with VMware, and the storage is located on the SAN.

Our storage administrator provides us with one big LUN, and we have created one big VMDK on this LUN. So right now this server has two VMDK disks (C:\ and D:\). On disk D:\ we have created one share, which is mapped by all end users.

Now we have had a discussion with our storage administrator, because he wants to rebuild the whole SAN/LUN environment. Instead of one big LUN, he will provide us with 6 or 7 smaller LUNs (roughly 1 TB each), because from his side small LUNs are easier to manage and move.

What is the best practice for disk management and partition design in such a case?

Should we use all the LUNs as separate disks (VMDKs) and build one volume over all of them? (Problem: if there is an issue with one disk, the whole volume fails.)

Should we use all the LUNs as separate disks and build a partition on each disk? (Problem: with more than one partition we would have to create a share on each one, and users would have to map all of those shares - normally we would like to have only one share.)

Does anyone know what is the best solution?
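To illustrate the first option above, here is a minimal sketch (placeholder names and size) of pooling the smaller VMDKs into one simple, non-resilient Storage Spaces volume so there is still a single D: and a single share. The trade-off is exactly the one already stated: a simple space stripes across every VMDK, so losing any one of them takes the whole volume down, and redundancy has to come from the SAN/VMware layer underneath.

# Hypothetical sketch: pool all poolable data disks and carve one simple volume
# out of them. Pool name, volume label and size are placeholders.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'DataPool' `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem)[0].FriendlyName `
    -PhysicalDisks $disks
New-Volume -StoragePoolFriendlyName 'DataPool' -FriendlyName 'Data' `
    -ResiliencySettingName Simple -FileSystem NTFS -DriveLetter D -Size 6TB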

Expand Disk


Hi All,

I hope someone is able to help.

We are seeing quite a few issues where disks are getting full due to WSUS updates. We run a scheduled task to clean up WSUS on a daily basis and this works quite well; however, we need to increase the capacity of this virtual disk.

The setup now is as below. As you can see, it's one disk with additional volumes. I have added an extra 20 GB drive and want to extend DATAPAR1 (D:), but I don't get the option. Is there a simple way to extend D: to give it the extra 20 GB I just created?
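A hedged note: Disk Management only offers Extend Volume when the unallocated space is on the same disk and immediately to the right of the volume, so adding a separate 20 GB virtual disk will not light the option up (and if D: is not the last volume on that disk, space added at the end will not either). The usual route is to grow the existing virtual disk at the VMware/Hyper-V layer and then extend the partition, roughly like this sketch:

# After enlarging the existing virtual disk (rather than adding a new one):
Update-HostStorageCache                                    # rescan for the new size
$max = (Get-PartitionSupportedSize -DriveLetter D).SizeMax
Resize-Partition -DriveLetter D -Size $max                 # grow D: to the maximum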

Regards.

Storage Spaces Single Parity Column Count


I am seeking help to define the correct PowerShell syntax for creating a single-parity Storage Space utilizing four disk drives of equal capacity (i.e. 4 x 2 TB drives) in a single storage pool. In theory this should result in approximately 75% of the pool for data and 25% of the pool for parity information.

Furthermore, it is unclear to me whether the Windows 10 GUI for Storage Spaces will create the requisite 4 columns if presented with four physical disks in a storage pool.

As I have researched this topic, it seems that PowerShell may create a better, more space-efficient Storage Space than the Windows 10 GUI will.

Any help or guidance will be appreciated.
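A minimal sketch of what I believe the PowerShell looks like for a four-column single-parity space over four equal disks; pool and virtual disk names are placeholders, and the final pipeline is the usual initialize/partition/format pattern. With 4 columns and single parity, each stripe holds 3 data columns plus 1 parity column, which is where the roughly 75% usable figure comes from; the Windows 10 GUI picks its own column count, which is the usual reason to do this from PowerShell when the column count matters.

# Hypothetical sketch (placeholder names): 4 equal disks, single parity, 4 columns.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'ParityPool' `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem)[0].FriendlyName `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName 'ParityPool' -FriendlyName 'ParitySpace' `
    -ResiliencySettingName Parity -NumberOfColumns 4 -PhysicalDiskRedundancy 1 `
    -ProvisioningType Fixed -UseMaximumSize
Get-VirtualDisk -FriendlyName 'ParitySpace' | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'ParityData'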


Storage Replica - Log service encountered a corrupted metadata file


Hi,

I have a WS2019 stretch cluster lab running Storage Replica Async and I have managed to break the replication, hoping someone can suggest how best to recover from a scenario like this.

It was working fine, and I actually enabled Deduplication on the cluster file server and tested that out. It appeared to be ok, but then I attempted to move the cluster group to another node and at this point Storage Replica failed with this error:

Log service encountered a corrupted metadata file.

I assume that the cluster may not have liked the fact that there were writes going on at the time the async disk replication attempted failover -- whether standard filesystem writes or Deduplication optimisation I'm not sure.

Now that it is in this state, how do I recover? If I attempt to online the disk resource it comes online for a few seconds then repeats the above error. Is there a way to repair or reset the log without starting from scratch? Or do I just need to disable replication and recreate it?
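Not sure this helps, but before tearing anything down it may be worth seeing what Storage Replica itself reports for the group and the replicas; a minimal sketch, assuming the in-box StorageReplica cmdlets on a cluster node. If nothing recovers, removing the partnership and group (Remove-SRPartnership / Remove-SRGroup) and recreating replication is the blunt fallback, at the cost of a new initial sync.

# Inspect the replication groups and per-volume state before deciding to rebuild.
Get-SRGroup | Format-Table Name, ReplicationMode -AutoSize
(Get-SRGroup).Replicas |
    Format-Table DataVolume, ReplicationStatus, NumOfBytesRemaining -AutoSize
Get-SRPartnership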

Thanks,
Simon.

How to connect to windows server through my PC using FileZilla?


Hello,

Prior to my question I want to clarify that I'm a beginner in this field, so please forgive me if my questions appear stupid.

This is what I'm trying to do:

I have a local server where we store data and that I currently access through remote desktop (using IP, username and password). I want to use FileZilla Client to communicate with this server, so that I can create users, configure their permissions to read and write to a server location. I tried to connect to it through FileZilla Client using FTP but failed to do so. Am I doing this wrong? Is FileZilla server needed in this case? How do you think I can do this?

My server has  a Windows Server 2012 R operating system.

Thanks in advance!

Huge dedup Storage , compression rate 99%


Hi,

I have a file server on Windows 2012 R2 with Data Deduplication active.

The partition has 9 TB of space, and 8.9 TB of it is used by the System Volume Information folder, so this looks like a dedup rate of 99%. Is that normal? If I look at the partition, only 62 GB is used for data; all the rest is in the System Volume Information folder.

I have already run GarbageCollection, but nothing changed.
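One hedged observation: with Data Deduplication the chunk store lives under System Volume Information, and optimised files shrink to reparse points, so most of the used space sitting in that folder is expected; whether 8.9 TB of chunk store is reasonable depends on how much logical data the volume actually holds, which is what the dedup cmdlets report. A minimal sketch (the drive letter is a placeholder):

# Check what deduplication reports for the volume ('E:' is a placeholder).
Get-DedupStatus -Volume 'E:' |
    Format-List Volume, SavedSpace, OptimizedFilesCount, InPolicyFilesCount, LastGarbageCollectionTime
Get-DedupVolume -Volume 'E:' | Format-List SavedSpace, SavingsRate
Get-DedupJob     # anything (optimization, GC, scrubbing) still running?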

best regards

Michael

Upgrade to Server 2019 storage spaces problem - not migrated due to partial or ambiguous match


I upgraded a server from 2016 to 2019. After the upgrade, one of my Storage Spaces drives went missing. The drives are showing in Device Manager but aren't showing in Disk Management, and the storage pool is gone. At first I thought the problem was that a cheap controller (4-port SATA in a home test server) wasn't supported in Server 2019, so I swapped it out for an LSI SATA/SAS card that has certified drivers. I'm still having the same problem. All 4 disks are showing this error in Device Manager:

Device SCSI\Disk&Ven_ATA&Prod_INTEL_SSDSA2MH08\5&10774fde&0&000400 was not migrated due to partial or ambiguous match.

Last Device Instance Id: SCSI\Disk&Ven_Msft&Prod_Virtual_Disk\2&1f4adffe&0&000003
Class Guid: {4d36e967-e325-11ce-bfc1-08002be10318}
Location Path:
Migration Rank: 0xF000FC000000F120
Present: false
Status: 0xC0000719

This doesn't seem to be directly a Storage Spaces issue, but since they were that way in 2016, I figure it might be related. Does anyone have any suggestions?
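If it's useful, a minimal sketch of the checks I'd run to see whether the disks and the pool are visible to the storage stack at all after the upgrade, using only the in-box Storage module. If the pool shows up but is read-only, or the virtual disks show as detached, Set-StoragePool -IsReadOnly $false and Connect-VirtualDisk are the usual next steps; if the physical disks don't appear here at all, the "not migrated" device entry above points more at a controller/driver migration problem than at Storage Spaces itself.

# Are the physical disks visible to the storage subsystem?
Get-PhysicalDisk | Format-Table FriendlyName, SerialNumber, OperationalStatus, HealthStatus, Usage -AutoSize

# Is the (non-primordial) pool still known, and is it healthy / writable?
Get-StoragePool -IsPrimordial $false | Format-Table FriendlyName, OperationalStatus, HealthStatus, IsReadOnly -AutoSize

# And the virtual disks carved from it:
Get-VirtualDisk | Format-Table FriendlyName, OperationalStatus, HealthStatus, IsManualAttach -AutoSize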
