Channel: File Services and Storage forum

Windows Error Reporting - Event ID 1001


Hello All,

On one of my servers I am getting "Windows Error Reporting" events (roughly 4 alerts per hour). Can anyone confirm whether these alerts are safe to ignore, or suggest a way to suppress or resolve them?

Please find the detailed information about the event below.

Fault bucket , type 0
Event Name: WindowsUpdateFailure3
Response: Not available
Cab Id: 0

Problem signature:
P1: 7.9.9600.18970
P2: 80072ef3
P3: 00000000-0000-0000-0000-000000000000
P4: Scan
P5: 0
P6: 1
P7: 0
P8: AutomaticUpdates
P9: 
P10: 0

Attached files:
C:\Windows\WindowsUpdate.log
C:\Windows\SoftwareDistribution\ReportingEvents.log

These files may be available here:

Analysis symbol: 
Rechecking for solution: 0
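For anyone triaging the same alerts, a minimal PowerShell sketch (assuming the events land in the Application log under the Windows Error Reporting provider) to see how often they actually fire might look like this:

# Count WER event 1001 entries mentioning WindowsUpdateFailure3 over the last day
Get-WinEvent -FilterHashtable @{
    LogName      = 'Application'
    ProviderName = 'Windows Error Reporting'
    Id           = 1001
    StartTime    = (Get-Date).AddDays(-1)
} | Where-Object { $_.Message -match 'WindowsUpdateFailure3' } | Measure-Object | Select-Object Count

Note that 80072ef3 is a WinInet connection error raised by the update scan, so the WER entries are a symptom of failing Windows Update scans rather than a problem in Windows Error Reporting itself.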



vicky



File Server Resource Manager PowerShell module missing in Windows 8 RSAT


It appears that the File Server Resource Manager PowerShell module does not get installed with the final version of RSAT for Windows 8 (I'm not sure whether this was the case with the preview versions). Everything else appears to be there as far as I can tell - dirquota.exe and the FSRM MMC snap-ins get installed. I tried toggling the feature in "Windows Features" but it did not help.

Is anyone else seeing this? Is this by design or a mistake in the packaging? There is no FSRM PowerShell module in system32\WindowsPowerShell\v1.0\Modules.
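For what it's worth, a quick way to check whether the module is present at all (a sketch; FileServerResourceManager is the module name on Server 2012, and I'm assuming RSAT would use the same name):

# Look for the FSRM module anywhere on the module path
Get-Module -ListAvailable FileServerResourceManager

# List the RSAT file services features and their install state
Get-WindowsOptionalFeature -Online | Where-Object { $_.FeatureName -like '*FileServices*' } | Select-Object FeatureName, State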

Thanks,

Doug

2012 R2 Deduplication Not Optimizing Any More Files


We have a 2012 R2 server with a 16TB iSCSI drive mapped that has deduplication enabled. We are using it for long term archives of backups of Hyper-V virtual machines. Veeam is the application doing the backups.

It was working normally for about 2 months, but for the past 3 weeks it hasn't optimized any new files; it has been stuck at 233 files optimized. During troubleshooting I saw there were about a dozen ddpcli.exe processes running. I couldn't end them, so I stopped the Deduplication service; it was stuck at "stopping", but I let it run overnight, and when I checked it in the morning it was up to 238 of 450 files optimized. That was two days ago and no more files have been optimized since. I tried running Stop-DedupJob to stop all jobs, then Start-DedupJob -Type Optimization -Full, and let it run overnight, but still no more progress. I also ran a garbage collection and scrubbing; that freed up some space, but no more files were optimized.

Any ideas on what else I can do?
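For reference, a minimal sequence to check the dedup state and kick off a fresh optimization with visible progress (a sketch, assuming the dedup volume is E:) might look like:

# Overall savings and last-run results for the volume
Get-DedupStatus -Volume E: | Format-List *

# Any jobs currently queued or running, with progress
Get-DedupJob

# Cancel whatever is stuck, then start a new optimization at high priority
Get-DedupJob | Stop-DedupJob
Start-DedupJob -Volume E: -Type Optimization -Priority High -Memory 50

# Dedup service event log, in case jobs are failing silently
Get-WinEvent -LogName Microsoft-Windows-Deduplication/Operational -MaxEvents 20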

Volume Deduplication


I have a virtual file server in my environment that has a drive filling up.  My usual process is to run WinDirStat to see what can be cleaned up before provisioning more storage. 

To my dismay, I found out that data dedupe is turned on for this volume!! I can't see which users (this is a home directory share) are actually consuming the most data. Is there any way to tell when this was turned on and by whom? Is this ever auto-enabled?

When I look in Server Manager I see that there are "Deduplication Savings"; however, File Explorer shows the disk at 80% capacity, with the "chunk store" taking up the majority of the actual space inside a hidden System Volume Information folder.

I cloned the file server. Next I disabled dedupe on the drive within Server Manager. Then I ran the PowerShell command:

Start-DedupJob -Volume "E:" -Type Unoptimization

This command causes the server to blue screen.
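One thing worth checking before retrying (a sketch, assuming the volume is E:): unoptimization rehydrates every optimized file, so the volume needs enough free space to hold the fully expanded data.

# Current savings and optimized file count on the volume
Get-DedupStatus -Volume E: | Select-Object SavedSpace, OptimizedFilesCount, InPolicyFilesCount

# Chunk store statistics
Get-DedupMetadata -Volume E:

# Free space actually left on the volume
Get-Volume -DriveLetter E | Select-Object Size, SizeRemaining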

Any suggestions?


JLC

Timeout whilst running Get-StorageSubSystem Cluster* | Get-StorageHealthReport


When we run the Get-StorageSubSystem Cluster* | Get-StorageHealthReport command, I get the following error on one of our clusters.

Invoke-CimMethod : Timeout
Activity ID: {6c63d452-e4e6-4270-a8ca-49514d62e5c7}
At C:\Windows\system32\WindowsPowerShell\v1.0\Modules\Storage\StorageScripts.psm1:3223 char:13
+             Invoke-CimMethod -CimSession $session -InputObject $sh -M ...
+             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : OperationTimeout: (StorageWMI:) [Invoke-CimMethod], CimException
    + FullyQualifiedErrorId : StorageWMI 3,Microsoft.Management.Infrastructure.CimCmdlets.InvokeCimMethodCommand

This command runs fine on our other cluster, but for some reason we get the above error when running it on this one.
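One thing that may be worth trying (a sketch, not a fix for whatever is wedging the health service): run the query through an explicit CIM session with a longer operation timeout, since the default CIM timeout appears to be what's expiring here. "node1" is a placeholder for one of the cluster nodes.

$session = New-CimSession -ComputerName "node1" -OperationTimeoutSec 600
Get-StorageSubSystem Cluster* -CimSession $session | Get-StorageHealthReport

If it still times out with a generous timeout, the Health Service on that cluster is probably the thing to investigate rather than the cmdlet.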

Cheers,

Liam

Windows 2016 Work Folders and file locks ("adxloader.log")


Hello
A brief request for best practices.
To describe the problem: we use W2016 Std with Work Folders; the clients are W7 or W10 Pro, also with Work Folders, plus Outlook 2010 or 2016.
Outlook always writes adxloader.log into the directory structure that is synced by Work Folders. As a result, there are constant problems with failed synchronization.

My question would be:
Is it possible to maintain an exclude list for Work Folders so that adxloader.log is not included? (This would be the preferred variant, because it would also be usable for other similar cases.)
Or should I reroute the path of adxloader.log? (That would be suboptimal for me, because it would change something inside MS Office and might not be update-safe.)

Can anyone give me a hint?

Thanks to all.


Thanks and best regards, Oliver Richter

DFS namespace - Clients connect to random server


Hi,

I have created a namespace in my domain, \\domain\files, which points to a replicated share on 3 file servers (in our 3 sites). Now, when a client executes net use x: \\domain\files, it connects to what seems to me a "random" server.

The 3 locations are recognized correctly in DFS Management for the 3 servers.

How can I diagnose further why that happens? What additional data do you need from me?

I want all clients to connect to their local file server (unless it's unavailable, in which case it's OK to use another one).
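A few checks that usually narrow this down (a sketch; dfsutil comes with the DFS Management tools):

# On the client: which AD site the client thinks it is in
nltest /dsgetsite

# On the client: the DFS referral cache, showing which target is active for \\domain\files
dfsutil /pktinfo

# On a namespace server: the namespace flags (look for site costing / referral ordering)
Get-DfsnRoot -Path \\domain\files | Select-Object Path, Flags

If the client's subnet isn't mapped to the right site in AD Sites and Services, the referral ordering can't prefer the local server and the chosen target will look random, which matches the behaviour described.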

Thanks!

VJP IT

Command line or PowerShell cmdlet to replace ownership on a folder


Hi,

Is there a cmd or PowerShell equivalent of the option below? I used takeown /R, but it does not take ownership of all subfolders. However, if I try with the GUI, I get no error message and it completes successfully. I have hundreds of user folders to delete. Please help.

 "Replace owner on subcontainers and objects"

Thanks,

Umesh.S.K



Query on SMB1 Deprecation


Hi,

I have 2008 R2 and 2016 DCs in my infrastructure, and my clients are Windows 2003, 2003 R2, 2008 R2, Windows 7 and above.

The 2008 R2 DC will be decommissioned soon. Due to application dependencies, we are still retaining the 2003 and 2003 R2 servers.

I have a query about disabling SMB1 for the clients and servers mentioned above. Is it recommended to disable SMB1?

If I disable SMB1, will my users face any issues accessing the shared folders on the legacy servers?
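One point worth noting: Windows Server 2003 and 2003 R2 only speak SMB1 (SMB2 was introduced with Vista/2008), so any client or server with SMB1 disabled will no longer be able to reach shares hosted on those legacy boxes. Before disabling anything, you can audit which sessions still negotiate SMB1 on the newer servers (a sketch; AuditSmb1Access is available on 2016, on older versions it may need an update, and the SmbShare cmdlets are not available on 2008 R2):

# Is SMB1 enabled on this file server, and turn on SMB1 access auditing
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol, AuditSmb1Access
Set-SmbServerConfiguration -AuditSmb1Access $true

# Current sessions and the SMB dialect each client negotiated
Get-SmbSession | Select-Object ClientComputerName, ClientUserName, Dialect

Audited SMB1 connections then show up in the Microsoft-Windows-SMBServer/Audit event log.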

Please assist.

 

Loss of data on a shared network drive


hi,

I am having a problem with my shared folder. On Monday I came back to work only to find that some of the folders in my shared areas were missing. So I went to check my backup drive, only to find that the same thing had happened there.

This is so weird; I had recently checked the backups and they were fine.

Storage Spaces Direct Cluster Aware Updating Behaviour


We have a 4 node S2D Hyperconverged cluster.

When running Server 2016 CAU on the cluster, the Microsoft documentation tells us that it is S2D-aware and that it will only reboot a cluster node when the storage is healthy.

We are not seeing this behaviour: CAU is rebooting nodes while the S2D virtual disks are listed as degraded and repair jobs are still in progress. Are there any known issues or hotfixes available for this?

For information, we are running the CAU GUI wizard from a Server 2016 virtual machine.
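For comparison, the health checks that are expected to gate each reboot can be run manually between nodes (a sketch) - every virtual disk healthy and no storage jobs still running:

# Both of these should return nothing before the next node is allowed to drain and reboot
Get-VirtualDisk | Where-Object HealthStatus -ne 'Healthy'
Get-StorageJob | Where-Object JobState -ne 'Completed'

If CAU proceeds while these still return results, capturing that output alongside the CAU run report would at least document the behaviour for a support case.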


Microsoft Partner


Move deleted files


I am currently configuring a server that will only house replicated folders from other servers. I would like to set up a folder that stores files that have been deleted on those servers, so that if the need arises to restore a file I don't have to rely on backups and then dig through them to find it.

I have been using DFS and have the replication set up, but I can't find any information on moving files that have been deleted to a separate folder. Firstly, is this possible? And if so, can someone point me in the direction of the information I need to set it up?
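For what it's worth, DFS Replication already keeps deleted and conflicting files on the receiving member, in the ConflictAndDeleted folder under the replicated folder's DfsrPrivate directory, up to a quota. So rather than building a separate "deleted files" folder, the usual approach is to raise that quota on the archive server and restore from there when needed. A sketch, with hypothetical replication group, folder, server and path names:

# Raise the ConflictAndDeleted quota (in MB) so deleted files are retained longer on this member
Set-DfsrMembership -GroupName "RG01" -FolderName "Data" -ComputerName "ARCHIVESRV" -ConflictAndDeletedQuotaInMB 20480 -Force

# List what is currently preserved in ConflictAndDeleted on that member
Get-DfsrPreservedFiles -Path "D:\Data\DfsrPrivate\ConflictAndDeletedManifest.xml"

Bear in mind the folder is purged once the quota is reached, so it complements backups rather than replacing them.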


Chris Mottershead

Ricoh Aficio MP C2051 Scan to Folder - Windows Server 2012 Error: Authentication with the destination has failed check settings


I have recently upgraded a client's servers to Windows Server 2012, and since doing so have lost the ability to scan to folder.

Both servers are domain controllers, and previously on a 2008 domain controller I would have had to make the following change to allow scan to folder:
 Administrative Tools
 Server Manager
 Features
 Group Policy Manager
 Forest: ...
 Default Domain Policy
Computer configuration
 Policies
 Windows Settings
 Security Settings
 Local Policies
 Security Options
 Microsoft Network Server: Digitally Sign Communications (Always)
 - Define This Policy
 - Disabled

However, I have applied this to the Windows 2012 server but am still unable to scan, possibly due to added layers of security in Server 2012. The error on the scanner is "Authentication with the destination has failed. Check settings."
I have also tried the following at the server:
Policies -> Security Policies
Change Network security: LAN Manager authentication level to: Send LM & NTLM - use NTLMv2 session security if negotiated.
Network security: Minimum session security for NTLM SSP based (including secure RPC) clients - uncheck "Require 128-bit encryption".
Network security: Minimum session security for NTLM SSP based (including secure RPC) servers - uncheck "Require 128-bit encryption".
I have created a user account on the server for the Ricoh, set this in the settings of the Ricoh, and verified everything is correct.
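One thing that may be worth verifying directly on the 2012 server (a sketch): many older Ricoh firmwares only speak SMB1, and the effective signing requirement on a domain controller can differ from what you set, because the Default Domain Controllers Policy enables "Digitally sign communications (always)" by default and wins over the Default Domain Policy.

# Effective SMB server settings on the 2012 DC
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol, RequireSecuritySignature, EnableSecuritySignature

If RequireSecuritySignature is still True after your GPO change, the change is being overridden; and if the device can't do SMB2, a firmware update on the MP C2051 is generally a cleaner fix than relaxing signing on a domain controller.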

Are there any other things I have missed?

Error 0x80070299 copying file to ReFS


We're retiring a 2008 R2 file server using NTFS and migrating the files to a Server 2016 server using ReFS.  Using robocopy to pre-seed the files, a small percentage of the files failed to copy.  The robocopy log reported:

ERROR 665 (0x00000299)...The requested operation could not be completed due to a file system limitation

Trying to manually copy the file generated the error:

Error 0x80070299 the requested operation could not be completed due to a file system limitation

The files copy correctly to NTFS volumes, but trying to copy to any ReFS volume on any server generates the error.  If I copied the file to a FAT32 partition (to strip the NTFS metadata), it would then copy to an ReFS volume with no error, but trying to strip the attributes by going to the file properties, Details tab, and using the "Remove properties and Personal Information" option had no effect (it still failed to copy).

I was able to narrow it down to the presence of a particular Alternate Data Stream (ADS).  All the files that failed have an ADS called "AFP_Resource", which is apparently for Mac compatibility (https://msdn.microsoft.com/en-us/library/dn392833.aspx).  If I remove that data stream or clear the contents of it, the file will then copy with no error.

However, we have a lot of files that also have that ADS that do copy successfully.  We have a fair number of Mac users, so I'd prefer not to remove that data stream from all the files that have it.  Ideally I'd like to remove whatever is problematic about the data stream and leave the rest intact.  Alternatively, it would also be helpful if anyone could re-assure me that removing that data stream won't negatively impact our Mac users.  I suspect it's not important, but I'd rather not find out by stripping the stream from thousands of files and end up getting a bunch of phone calls. ;)

I suspect I'm going to end up using robocopy to identify the problematic files and then script removing this ADS just from those files, but if anyone has more info on this I would love to hear it.

More info below for those who might also be struggling with this.  It took me a few hours to track this down, so hopefully this will save someone else some time.

You can see what alternate datastreams exist for a file using either of the following:
dir /R <filename>
get-item <filename> -stream * | select Stream,Length

Remove a data stream:
remove-item <filename> -stream <stream name>

Clear contents of a data stream:
clear-content <filename> -stream <stream name>

View contents of a data stream:
get-content <filename> -stream <stream name>
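If it does come down to scripting it, a sketch of the approach described above (find the files carrying an AFP_Resource stream, then remove it) might look like the following - hypothetical path, and worth testing on copies first:

# Find every file under the source tree that carries the AFP_Resource alternate data stream
$flagged = Get-ChildItem -Path "D:\Share" -Recurse -File | Where-Object {
    Get-Item -LiteralPath $_.FullName -Stream "AFP_Resource" -ErrorAction SilentlyContinue
}

# Strip the stream (restrict $flagged to the files from the robocopy failure log
# if you only want to touch the ones that actually refuse to copy)
$flagged | ForEach-Object { Remove-Item -LiteralPath $_.FullName -Stream "AFP_Resource" }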

Decent blog post explaining NTFS attributes (particularly $DATA, but also $STANDARD_INFORMATION, $FILE_NAME, etc.):
https://blogs.technet.microsoft.com/askcore/2009/10/16/the-four-stages-of-ntfs-file-growth/




Mirror-accelerated volume on Win2016 standalone (no S2D)


I know that this is an unsupported scenario, but some months ago I created a mirror+parity ReFS storage tiers volume on a standalone Windows 2016 server. I did not receive any warnings about a misconfiguration during creation.
As you can see on Google, I'm not the only one in that situation. (link)

So I'm neither using standard Win2012-style storage tiers nor Storage Spaces Direct.

But the system seems to work well for months!

How can I confirm that data regularly rotates between the hot and cold tiers? How can I know if the cache space is enough? How can I know if the SSD tier is too small or even full?

The optimization task is not supported, but in Performance Monitor, under the Storage Spaces counters, I can see (or so it seems) that data continuously rotates between the tiers.


This is the PowerShell script I used:

New-StoragePool -StoragePoolFriendlyName "Pool1" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks (Get-PhysicalDisk -CanPool $true) -LogicalSectorSizeDefault 512 -FaultDomainAwarenessDefault PhysicalDisk

Set-PhysicalDisk -Friendlyname "WDC WD30EZRX-00MMMB0" -MediaType HDD

New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName SSD_Tier -MediaType SSD -ResiliencySettingName Mirror

New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName HDD_Tier -MediaType HDD -ResiliencySettingName Parity

$ssd_tier = Get-StorageTier -FriendlyName SSD_Tier

$hdd_tier = Get-StorageTier -FriendlyName HDD_Tier

New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName "DATI" -WriteCacheSize 3GB -StorageTiers @($ssd_tier,$hdd_tier) -StorageTierSizes 249GB, 4560GB

Set-StoragePool -FriendlyName Pool1 -IsPowerProtected $True
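For what it's worth, a few read-only queries that at least show how full the pool and the tiers are (a sketch - they won't prove that data rotates, but they show whether the SSD tier is sensibly sized against what has been allocated):

# Pool capacity vs. what has been allocated from it
Get-StoragePool -FriendlyName Pool1 | Select-Object FriendlyName, Size, AllocatedSize

# The virtual disk's footprint on the pool (includes mirror/parity overhead) and its health
Get-VirtualDisk -FriendlyName DATI | Select-Object FriendlyName, Size, FootprintOnPool, HealthStatus

# The tier definitions and their sizes
Get-StorageTier | Select-Object FriendlyName, MediaType, Size, ResiliencySettingName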







How to clear DFS backlog


Hi,

I have set up a read-only replicated folder (W2K12) and noticed a lot of backlog. How do we clear this?

bpo.com\corpdata\contracts From: SCSFS11 To: SCSFS12 Backlog: 954
bpo.com\corpdata\contracts From: SCSFS12 To: SCSFS11 Backlog: 0
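A couple of commands for digging into what the backlog actually consists of (a sketch - substitute the real replication group and folder names, which may differ from the namespace path; the DFSR module cmdlets need 2012 R2, and on plain 2012 dfsrdiag backlog gives the same view):

# Per-file view of the backlog from SCSFS11 to SCSFS12
Get-DfsrBacklog -GroupName "corpdata" -FolderName "contracts" -SourceComputerName "SCSFS11" -DestinationComputerName "SCSFS12"

# What the receiving member is doing right now (downloading, installing, waiting)
Get-DfsrState -ComputerName "SCSFS12"

The backlog only clears by letting replication catch up, so these are for finding what's holding it back (staging quota too small, schedule or bandwidth throttling, or the service stopped on the receiving side) rather than for wiping the queue.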

As


Script to update NetBIOS targets to FQDN targets in DFS using DFSCMD/DFSUtil


Hi ,

Need help. I have one namespace running on 3 servers. Earlier these were set up to use NetBIOS names for the folder targets. Now we would like them to have FQDNs.

Here is what I am exactly trying to achieve:

1. Add an FQDN target to each folder that has a NetBIOS target. Some folders already have FQDN targets, so they should be skipped.
2. Then remove the NetBIOS target where an FQDN target has been added in step 1.
"dfscmd /view" can be used to make a list of targets, which can then be used to perform the above two steps in sequence, with a script, one folder at a time.
eg. for /f %i in ('dfscmd /view .... ') do dfsutil target ... 

The only reason to use a script is to avoid any downtime, so the namespaces can remain online.
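If the namespace servers are 2012 or later, the DFSN PowerShell module may be easier to script than dfscmd/dfsutil for this. A rough sketch of the add-then-remove pass - \\contoso.com\root and the domain suffix are placeholders, and it's worth doing a dry run (print instead of change) first:

# For every folder target that uses a bare NetBIOS server name, add the FQDN
# equivalent and then remove the NetBIOS one; targets already using FQDNs are skipped
Get-DfsnFolder -Path "\\contoso.com\root\*" | Get-DfsnFolderTarget | ForEach-Object {
    if ($_.TargetPath -match '^\\\\([^\\.]+)\\(.+)$') {   # server portion has no dot = NetBIOS
        $fqdnTarget = "\\$($Matches[1]).contoso.com\$($Matches[2])"
        New-DfsnFolderTarget -Path $_.Path -TargetPath $fqdnTarget
        Remove-DfsnFolderTarget -Path $_.Path -TargetPath $_.TargetPath -Force
    }
}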

Appreciate the help in advance.

Regards,

Himvy




Server 2012 R2 Network share - search / indexing issue


Hello,

I have a network share that was moved from a Server 2008 R2 system to a Server 2012 R2 system. The issue is searching the network files on the new system. I can open 2 Explorer windows, one to the old share and one to the new share, and search for, as an example, *.pdf. On the new share it will list, again for example, 4,000 files; when I search the old share, the search returns 165,000 files.

Indexing is on, the desired folders are selected, and I have already rebuilt the index several times. This has had absolutely no effect on the issue. **I will not rebuild it again because it takes 36-72 hours to complete**, during which time all searching is disabled. This folder is searched by multiple users daily. **If "rebuild the index" is your solution, please don't bother posting.**

For some workstations, if they check the "Don't use the index when searching..." box in search options, searching the new network share will list 165,000 files. This does not work for all systems (it only seems to work for Windows 10 systems). These folders have 2-way file replication enabled, so they should be exact copies. It seems automatic indexing is not adding all of the files. So here is the question: how do I force Windows to index specific folders and files? I have seen this issue reported in several posts across various Microsoft forums, but I have not found a working solution.
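One quick sanity check that can separate an index problem from a content problem (a sketch - hypothetical UNC paths): compare raw file counts on both shares without involving Windows Search at all:

# Count PDFs on each share directly, bypassing the index
(Get-ChildItem -Path "\\oldserver\share" -Recurse -Filter *.pdf -File -ErrorAction SilentlyContinue).Count
(Get-ChildItem -Path "\\newserver\share" -Recurse -Filter *.pdf -File -ErrorAction SilentlyContinue).Count

If the counts match, the data is there and it really is the new server's index that is incomplete; if they don't, replication hasn't produced an exact copy and the index is a red herring.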

I am not allowed to upload screenshots or I would have.

Update multiple folder targets in DFSR

Is there a script or any automated way to rename/update many folder targets in DFS in one go?

Slow Copy Speeds


Perhaps this has been answered somewhere, but I only seem to find scattered discussions and potential solutions spread across many years.

I am currently transferring a large amount of data (10 TB) from one Server 2012 machine to another of the same. The switch is a gigabit switch, and the NICs on both ends are gigabit NICs. It is taking more than 3 days for the copy - or, to use perhaps an easier number, more than 9 hours to transfer 1.2 TB. In the first minute or so the line was saturated; then, rather quickly, it dropped significantly, hovering generally on either side of 35.0 MB/s. Nothing else is running on either server other than typical overhead processes, and during this process there have been no outside connections.

What am I doing wrong?
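For what it's worth, a single Explorer copy (or single-threaded robocopy) over SMB often settles at around that rate when there are lots of small files; running the copy multithreaded is the usual first thing to try (a sketch - hypothetical paths):

robocopy D:\Data \\destserver\D$\Data /E /COPYALL /MT:32 /R:1 /W:1 /NP /LOG:C:\Temp\copy.log

/MT:32 runs 32 copy threads in parallel, which tends to help far more with many small files than with a few huge ones, and /NP keeps the log readable when multithreading.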


Rookie
