The window is divided into a green and a blue area. Files that should be moved or copied can be dragged into the green area, while the blue area is reserved for the target directories. A right-click on the tiny overlay opens the main interface, which lists all the files and targets that have been dragged and dropped onto the application up to this point.
It is possible to remove some or all of the files and targets again, and to specify whether the task should copy or move the files. The only other option is to define a rule for situations where identical files are encountered in the target directories.
The choices are to either overwrite those files or rename the new files automatically. A click on the Do the task button starts the copy or move process, and the program displays a status report at the end that highlights how successful the operation was.
A click on Return, on the other hand, brings back the small overlay window, which can be used to add more files and targets to the program. The program uses less than 3 megabytes of memory while running. A program that works similarly to n2ncopy is Piky Basket, which also lets you add files or folders to a copy or move job and then run the operation in one go. N2ncopy is a handy program for Windows that improves file copy and move operations when multiple source or target locations are involved.
The program is a bit cumbersome to work with, though, as you cannot add targets to the main program interface directly. The newest version of robomojo has command line support.

I'm going to try creating two robocopy sessions five minutes apart to the respective destinations and see how the performance is. You could set five-minute gaps in your task schedules, but what if one runs longer than five minutes? I created two robocopy tasks that run ten minutes apart, and so far everything seems to be fine; no IOPS issues that I can see at the moment.
Supplementing what others have said, if you are copying the same source data to multiple different physical disks or arrays you should set up all the robocopy jobs and start them as close together as possible. Whichever job falls behind will ordinarily catch up as it benefits hugely from the disk cache read hits that the leading job leaves in its wake. This keeps the jobs in sync and, because of the constant cache hits, means the source data will only be read from storage once.
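As a rough illustration of that advice, a small C# wrapper along the following lines would kick off all the robocopy jobs back to back and wait for them together. The source and target paths and the robocopy switches are placeholders I've assumed, not anything from the thread, so treat this as a minimal sketch:

```csharp
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

class ParallelRobocopy
{
    static async Task Main()
    {
        string source = @"D:\Data";                        // hypothetical source
        string[] targets = { @"E:\Backup", @"F:\Backup" }; // hypothetical destinations

        // Start one robocopy job per destination as close together as possible,
        // so trailing jobs benefit from the cache the leading job warms up.
        var jobs = new List<Process>();
        foreach (string target in targets)
            jobs.Add(Process.Start("robocopy", $"\"{source}\" \"{target}\" /E /R:1 /W:1"));

        // Wait for every job to finish (WaitForExitAsync needs .NET 5 or later).
        await Task.WhenAll(jobs.Select(j => j.WaitForExitAsync()));
    }
}
```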
A closely related question came up on Stack Overflow: how to copy a single source file to multiple destinations efficiently. Only fragments of the asker's code survive, showing a ForEach over file details, a FileStream opened with FileMode.Open, FileAccess.Read, and FileShare.Read, and reads into a byte buffer.
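From those fragments, the original approach appears to have copied the source buffer-by-buffer for each destination in turn. Whether it used Parallel.ForEach or a plain loop isn't recoverable, so this reconstruction uses a plain foreach; FileDetail, DestinationPaths, and bufferSize are names inferred from the fragments, not confirmed:

```csharp
using System.Collections.Generic;
using System.IO;

// Hypothetical shape of the question's per-file input (names inferred
// from the fragments; the real type is not shown in the thread).
public record FileDetail(string SourcePath, List<string> DestinationPaths);

public static class SequentialCopier
{
    public static void CopyAll(IEnumerable<FileDetail> fileDetails, int bufferSize)
    {
        foreach (var fileDetail in fileDetails)
        {
            foreach (string destination in fileDetail.DestinationPaths)
            {
                // The source is re-opened and re-read once per destination,
                // so N targets cost N full reads of the same file.
                using var input = new FileStream(fileDetail.SourcePath,
                    FileMode.Open, FileAccess.Read, FileShare.Read);
                using var output = new FileStream(destination,
                    FileMode.Create, FileAccess.Write, FileShare.None);

                var buffer = new byte[bufferSize];
                int read;
                while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                    output.Write(buffer, 0, read);
            }
        }
    }
}
```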
I thought I'd post my current solution for anyone else who comes across this question. The idea is to open the source file once for reading, ensure each destination folder exists with Directory.CreateDirectory, open one write stream per destination with FileMode.Create and FileAccess.Write using the same bufferSize, and write every chunk to all outputs, awaiting the batch with Task.WhenAll. A commenter added that the reads should use ReadAsync, and that you should await the resulting tasks all the way.
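Reassembled from the surviving fragments, the answer's approach looks roughly like the sketch below, which reuses the hypothetical FileDetail record from the previous snippet. The FileOptions flags and error handling are my assumptions where the original text is garbled, so this is an approximation of the posted answer, not a verbatim copy:

```csharp
using System.IO;
using System.Linq;
using System.Threading.Tasks;

public static class FanOutCopier
{
    // Read the source once; write every chunk to all destinations in parallel.
    public static async Task CopyToManyAsync(FileDetail fileDetail, int bufferSize)
    {
        using var input = new FileStream(fileDetail.SourcePath,
            FileMode.Open, FileAccess.Read, FileShare.Read,
            bufferSize, FileOptions.Asynchronous | FileOptions.SequentialScan);

        // One async write stream per destination; create the folders first.
        var outputs = fileDetail.DestinationPaths.Select(path =>
        {
            Directory.CreateDirectory(Path.GetDirectoryName(path)!);
            return new FileStream(path, FileMode.Create, FileAccess.Write,
                FileShare.None, bufferSize, FileOptions.Asynchronous);
        }).ToList();

        try
        {
            var buffer = new byte[bufferSize];
            int read;
            while ((read = await input.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                // Fan the chunk out to every destination and await the whole
                // batch, as the commenter advised: await the tasks all the way.
                await Task.WhenAll(outputs.Select(o => o.WriteAsync(buffer, 0, read)));
            }
        }
        finally
        {
            foreach (var output in outputs)
                await output.DisposeAsync();
        }
    }
}
```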
You may find more information here: Parallel foreach with asynchronous lambda and Nesting await in Parallel.ForEach. Thanks for your help, VMAtm. I've been tweaking my code for the past week and did manage to get a single read with parallel async writes using Task.WhenAll.
I still can't seem to reach the speed I've seen in another application, though. Do you know much about I/O buffers? I've noticed that if I push the buffer to something much larger, like 60 MB, instead of my usual size, it's quicker, but I'm not sure a buffer that large is a good idea.
By coincidence, I did some investigation into buffer sizes today, and as far as I can tell you should measure the performance yourself: some people say a large buffer is fine, while others suggest staying in the kilobyte range. So it is up to you; just measure it. OK, it seems like I just need to do some testing with buffer sizes.
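A throwaway harness along these lines is one way to run that test: it times the same copy with several candidate buffer sizes. The paths and the size list are placeholders, and note that the OS file cache warms up after the first pass, so run each size a few times before trusting the numbers:

```csharp
using System;
using System.Diagnostics;
using System.IO;

class BufferSizeBenchmark
{
    static void Main()
    {
        string source = @"C:\temp\big.bin";  // hypothetical large test file
        string target = @"C:\temp\copy.bin"; // hypothetical output path

        foreach (int size in new[] { 4 * 1024, 64 * 1024, 1024 * 1024, 60 * 1024 * 1024 })
        {
            var sw = Stopwatch.StartNew();
            using (var input = new FileStream(source, FileMode.Open,
                       FileAccess.Read, FileShare.Read, size))
            using (var output = new FileStream(target, FileMode.Create,
                       FileAccess.Write, FileShare.None, size))
            {
                input.CopyTo(output, size); // copy using the candidate buffer size
            }
            sw.Stop();
            Console.WriteLine($"{size,12:N0} bytes: {sw.ElapsedMilliseconds} ms");
        }
    }
}
```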