Commit 266c5555ba by Brian:

The script is much faster than before, though there is still plenty of room for improvement; it will just be more complicated. Calling `p4 fstat` on the entire directory gives you everything you need up front, but the results are in depot paths, which makes them a little annoying to parse when workspace mappings move things around: the local path may differ from the depot path, and it becomes harder to be 100% sure you're referring to the same file. I don't want to have to call p4 on every file to be sure of that, and what I'm doing now is the easiest, safest way I know to be sure. Another way to speed this up would be to add threaded crawlers; I'm just not sure yet how many threads are a good idea on HDDs versus SSDs.
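
For reference, here is a minimal, hypothetical sketch of that bulk approach (not the script's actual code): one `p4 fstat` over a whole directory in p4's tagged output, with local paths taken from the `clientFile` field and normalized before comparison. The helper names are made up for illustration, and the exact fields p4 reports can vary with the client and server setup.

```python
# Hypothetical sketch of the bulk approach described above, not the
# script's actual code: run one `p4 fstat` over a whole directory and
# collect the local paths Perforce already knows about. Field names
# follow p4's tagged output; exact fields can vary by setup.
import os
import subprocess

def fstat_records(directory):
    """Yield one dict per file record from a single `p4 fstat dir/...` call."""
    out = subprocess.run(
        ["p4", "fstat", os.path.join(directory, "...")],
        capture_output=True, text=True
    ).stdout
    record = {}
    for line in out.splitlines():
        if not line.strip():        # blank line ends a file record
            if record:
                yield record
            record = {}
        elif line.startswith("... "):
            field, _, value = line[4:].partition(" ")
            record[field] = value
    if record:
        yield record

def known_local_paths(directory):
    """Collect normalized local paths reported by Perforce for a directory."""
    paths = set()
    for rec in fstat_records(directory):
        client_file = rec.get("clientFile")
        if client_file:
            # Normalization is the tricky part mentioned above: workspace
            # mappings mean depot and local paths can differ, and case or
            # separator differences make exact matching fragile.
            paths.add(os.path.normcase(os.path.normpath(client_file)))
    return paths
```

Anything found on disk whose normalized path is not in that set would be a removal candidate; the hard part, as noted above, is being certain the comparison really identifies the same file.
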
README.md
p4RemoveUnversioned
Removes unversioned files from a Perforce repository. The script is in beta, though it works. It needs a partial rewrite for major speed improvements: right now it makes an individual `p4 fstat` call for each file, when it should issue one call on an entire directory.
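
For context, a per-file check along those lines might look like the following hypothetical sketch (not the actual script); it is unambiguous about which local file is being queried, but it pays one `p4` process launch per file.

```python
# Hypothetical sketch of a per-file check, not the actual script: one
# `p4 fstat` call per file is slow but unambiguous about which local
# file is being queried.
import subprocess

def is_versioned(local_path):
    """Return True if `p4 fstat` reports a record for this local path."""
    result = subprocess.run(
        ["p4", "fstat", local_path],
        capture_output=True, text=True
    )
    # A tracked file produces tagged output such as "... depotFile ...";
    # an unversioned file produces no record, only a "no such file(s)"
    # style message on stderr.
    return "... depotFile" in result.stdout
```
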
The script parses `.p4ignore` files, compiles their entries as regex, and checks every directory and file against the local and parent `.p4ignore` files. This is my first time doing something like this, and I just realized it isn't actually correct; I need to update how things are ignored to follow the spec, since `.p4ignore` patterns aren't plain regex.
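
As a rough illustration of that approach, here is a hypothetical sketch (not the script's actual code) that loads a directory's `.p4ignore` entries as compiled regexes via glob translation; per the caveat above, real P4IGNORE patterns are glob-like with their own rules rather than plain regex, so this is only an approximation.

```python
# Hypothetical sketch of loading .p4ignore entries as compiled regexes,
# roughly in the spirit described above. Real P4IGNORE matching follows
# its own spec, not plain regex, so this is only an approximation.
import fnmatch
import os
import re

def load_p4ignore(directory):
    """Read a directory's .p4ignore (if any) and return compiled patterns."""
    path = os.path.join(directory, ".p4ignore")
    patterns = []
    if not os.path.isfile(path):
        return patterns
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):   # skip blanks and comments
                continue
            # fnmatch.translate turns a glob into a regex string; this
            # approximates P4IGNORE matching rather than implementing it.
            patterns.append(re.compile(fnmatch.translate(line)))
    return patterns

def is_ignored(name, inherited_patterns):
    """Check a file or directory name against local plus parent patterns."""
    return any(p.match(name) for p in inherited_patterns)
```

A crawler would accumulate patterns from each parent directory's `.p4ignore` on the way down and test every file and subdirectory against the combined set.
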
Files are currently permanently deleted, so use this at your own risk.