Commit Graph

42 Commits

Author SHA1 Message Date
leetNightshade e5a84235cb Fix issue where clientRoot is null due to multiple view mappings that
don't share one root. TODO: should probably let getClientRoot return
the "null"; it's different from returning None.
2017-04-20 16:20:43 -07:00
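A minimal sketch of the situation this commit handles, assuming the client root is read from `p4 info` (the repo's actual getClientRoot may work differently): a workspace whose view mappings span multiple roots can report a literal "null" root, which is not the same as having no root at all.

```python
import subprocess

def get_client_root():
    """Return the client root from `p4 info`, which may be the literal 'null'."""
    out = subprocess.run(["p4", "info"], capture_output=True,
                         text=True, check=True).stdout
    for line in out.splitlines():
        if line.startswith("Client root:"):
            return line.split(":", 1)[1].strip()
    return None  # no client root reported at all

root = get_client_root()
if root == "null":
    # View mappings don't share one root; fall back to the current directory.
    root = "."
```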
unknown 972e9ca689 Fixed comparison issue, apparently had to make sure the number was an int.
Stupid fucking error.
2015-06-09 10:15:00 -06:00
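The underlying gotcha, for the record: numbers coming back from p4 as text compare lexicographically until you convert them, which is presumably why the int cast was needed (the values here are only illustrative).

```python
change_a, change_b = "10", "9"                    # numeric values still held as strings
assert (change_a < change_b) is True              # lexicographic comparison: wrong answer
assert (int(change_a) < int(change_b)) is False   # integer comparison: correct
```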
leetNightshade 55e5033794 Fixed error, forgot to comment out a line. 2015-06-08 21:20:28 -06:00
unknown d5dc8155f5 Fixed up path limitation issue with p4. 2015-06-08 14:50:07 -06:00
unknown a5f82d5e00 Neatened up output. 2015-05-13 12:12:36 -06:00
unknown 26d1127e64 Neatened up console output a little bit. 2015-05-13 12:10:58 -06:00
unknown 92d217371c Fixed scripts up and improved logging; the console has a waking thread
now. Also fixed a bug so that if the console timer runs too long, it
gets killed off appropriately.
2015-05-13 12:06:54 -06:00
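A hedged sketch of what a waking console thread with a well-behaved timer could look like; the class and parameter names are illustrative, not the repo's actual console code.

```python
import threading

class Console(threading.Thread):
    """Buffered console that wakes on an interval and shuts down promptly."""

    def __init__(self, wake_interval=0.5):
        super().__init__(daemon=True)
        self.wake_interval = wake_interval
        self._stop_event = threading.Event()
        self._lock = threading.Lock()
        self._lines = []

    def write(self, line):
        with self._lock:
            self._lines.append(line)

    def run(self):
        # Event.wait() returns as soon as stop() is called, so even a long
        # timer no longer keeps the thread alive past shutdown.
        while not self._stop_event.wait(self.wake_interval):
            self.flush()
        self.flush()  # drain anything written just before shutdown

    def flush(self):
        with self._lock:
            lines, self._lines = self._lines, []
        for line in lines:
            print(line)

    def stop(self):
        self._stop_event.set()
```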
unknown ea14f96d76 Fixed bug in p4SyncMissingFiles.py. Also fixed bug in p4Helper when
running p4RemoveUnversioned.py.
2015-05-13 10:45:04 -06:00
unknown c32c0bfbd1 Added bucketing based on file type (text/binary) and batching to reduce
server calls.
2015-05-12 14:47:18 -06:00
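A rough sketch of the bucketing-and-batching idea described above; the extension-based text/binary split and the batch size are assumptions, not the script's actual classification.

```python
import os
import subprocess

BINARY_EXTS = {".png", ".jpg", ".dll", ".exe", ".lib"}   # illustrative list only

def bucket_by_type(paths):
    buckets = {"text": [], "binary": []}
    for path in paths:
        ext = os.path.splitext(path)[1].lower()
        buckets["binary" if ext in BINARY_EXTS else "text"].append(path)
    return buckets

def run_batched(p4_args, paths, batch_size=100):
    # One p4 invocation per batch of files instead of one per file.
    for i in range(0, len(paths), batch_size):
        subprocess.run(["p4"] + p4_args + paths[i:i + batch_size], check=True)
```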
unknown 49153babed Fixed output bugs in p4SyncMissingFiles.py. 2015-02-18 15:28:58 -07:00
unknown 6610e8e357 Accidentally committed pyc. Will have to add .gitignore. 2015-02-18 15:14:18 -07:00
unknown 1d1d7f8cae Split the scripts up for now; may add all-in-one scripts later. Added
p4SyncMissingFiles.py so you don't have to do a force sync and redownload
everything.
2015-02-18 15:09:57 -07:00
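One possible shape for the sync-missing-files idea, assuming `p4 diff -sd` (which lists unopened files missing from the workspace) plus a force sync of just those files; p4SyncMissingFiles.py may well do this differently.

```python
import subprocess

def missing_files(path="..."):
    # `p4 diff -sd` prints depot paths of unopened files missing locally.
    out = subprocess.run(["p4", "diff", "-sd", path],
                         capture_output=True, text=True).stdout
    return [line.strip() for line in out.splitlines() if line.strip()]

def sync_missing(path="...", batch_size=100):
    files = missing_files(path)
    for i in range(0, len(files), batch_size):
        # Force-sync only the missing files, not the whole workspace.
        subprocess.run(["p4", "sync", "-f"] + files[i:i + batch_size], check=True)
```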
unknown 9d4d26250d Added a fix so that if the specified directory isn't added to the repo
but still sits inside one, it gets scanned and its contents cleaned up.
The only caveat is that, as of right now, the folder itself won't be
deleted; you'd have to run the script from a higher directory.
2015-01-14 17:58:34 -07:00
unknown e7bb65874e Added more debug info at the end for files processed and cleaned up formatting. At some point I'll strive to make the output more UNIX-friendly and parsable.
I also fixed a bug where the script would crash instead of setting the P4Client.

I need to fix the script to use a `with` construct so that if you terminate the program the P4Client is returned to what it was.
2014-10-22 12:05:57 -06:00
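A minimal sketch of the `with` construct mentioned above, assuming the client is switched via the P4CLIENT environment variable (the real script may set it another way); the `finally` restores the previous client even if the run is interrupted.

```python
import contextlib
import os

@contextlib.contextmanager
def temporary_client(client_name):
    previous = os.environ.get("P4CLIENT")
    os.environ["P4CLIENT"] = client_name
    try:
        yield
    finally:
        # Restore whatever client was active before, even on Ctrl+C.
        if previous is None:
            os.environ.pop("P4CLIENT", None)
        else:
            os.environ["P4CLIENT"] = previous

# Usage sketch:
# with temporary_client("my_workspace"):
#     ...run the p4 commands...
```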
U-ILLFONIC\bernst 06b0cbe426 Adding huge improvements. There are still a few more to make to account for computers that aren't set up correctly, but it's functional. Still has the occasional console hang bug. Now also prints out the run time. There is one new minor bug: reverting back to the previously set client view. 2014-08-13 17:09:19 -06:00
Brian 6236ead338 Adding new worker run type 2014-05-14 19:09:46 -06:00
Brian 3ffdd76147 Added basic worker thread back in, and TODO comments for multi-threading this new script. 2014-05-13 20:45:55 -06:00
Brian 59e010d682 Added a warning note for large depots. 2014-05-13 20:33:11 -06:00
Brian fd419089be See description. Why does this have to be so short?
Removed excess polling of p4. Fixed quiet output. Added directory
removal back in. Made the output a little nicer, added singular and
plural strings, and added a directory total to the output.
2014-05-13 20:18:16 -06:00
Brian 865eaa243d Removed creation of the NUL file; it's annoying to get rid of. Also changed the error formatting a little. 2014-05-13 19:08:53 -06:00
Brian 4435a36bed Made the script obey the quiet option. Added file and error counts to print at the end.
Also made sure the error output gets piped and doesn't show up in the
console. However, we shouldn't ignore any error output; it should be
accounted for and properly logged, so this is a TODO.
2014-05-13 14:08:16 -06:00
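A hedged sketch of the behavior described in this commit: pipe p4's stderr so errors never hit the console, honor a quiet flag, and keep counts to print at the end (function and variable names are illustrative).

```python
import subprocess

def run_p4(args, quiet=False):
    # stderr is piped instead of inherited, so p4 errors never reach the
    # console; for now they are only returned (proper logging is the TODO).
    proc = subprocess.run(["p4"] + list(args), stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE, text=True)
    if proc.stdout and not quiet:
        print(proc.stdout, end="")
    return proc.stdout, proc.stderr

file_count = error_count = 0
for target in ["foo.txt", "bar.txt"]:        # illustrative file list
    _out, err = run_p4(["fstat", target], quiet=True)
    if err:
        error_count += 1
    else:
        file_count += 1
print(f"{file_count} file(s) processed, {error_count} error(s)")
```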
Brian 0dcd14a73b Working in Python 2.7.4 and Python 3.4.0; HOWEVER, the console isn't exiting correctly. 2014-05-09 19:11:23 -06:00
Brian 55a5e41b00 Improved the auto flushing; made it time- and buffer-size-based.
In case a specific directory was taking a while, I changed it to auto
flush after a specified period of time. Right now autoflush is disabled
by default; you have to enable it when creating the console.

TODO: I'll probably hook the console up to stdout and stderr so you can
use ordinary print statements; we'll see. This is desirable for easily
hooking it into an existing module.
2014-05-09 17:36:49 -06:00
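A sketch of the time- and size-based autoflush policy this commit describes, with autoflush off unless requested at construction; the thresholds and names are assumptions.

```python
import time

class BufferedConsole:
    def __init__(self, auto_flush=False, max_lines=50, max_seconds=2.0):
        self.auto_flush = auto_flush
        self.max_lines = max_lines
        self.max_seconds = max_seconds
        self._lines = []
        self._last_flush = time.monotonic()

    def write(self, line):
        self._lines.append(line)
        too_big = len(self._lines) >= self.max_lines
        too_old = time.monotonic() - self._last_flush >= self.max_seconds
        if self.auto_flush and (too_big or too_old):
            self.flush()

    def flush(self):
        for line in self._lines:
            print(line)
        self._lines.clear()                 # clear after flushing (see 3eb7a78339)
        self._last_flush = time.monotonic()
```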
Brian c175b21dcf Grabs the depot tree up front to make looping through directories faster.
The big catch right now is that this method is single-threaded. I
haven't made it multi-threaded yet, but it definitely looks like it
could benefit from it.
2014-05-09 17:19:44 -06:00
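A hedged sketch of the grab-the-tree-up-front approach: one `p4 fstat` over the whole tree builds an in-memory set of files Perforce knows about, so the directory walk never has to query the server per file. The `-T clientFile` field filter is an assumption about how the local paths might be obtained, not necessarily what the script does.

```python
import os
import subprocess

def known_client_files(root="..."):
    out = subprocess.run(["p4", "fstat", "-T", "clientFile", root],
                         capture_output=True, text=True).stdout
    known = set()
    for line in out.splitlines():
        if line.startswith("... clientFile "):
            known.add(os.path.normcase(line[len("... clientFile "):].strip()))
    return known

known = known_client_files()
for dirpath, _dirs, files in os.walk("."):
    for name in files:
        local = os.path.normcase(os.path.abspath(os.path.join(dirpath, name)))
        if local not in known:
            pass  # candidate unversioned file
```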
Brian 8d425d6413 Catch exceptions in file iteration so you can continue processing the remaining files.
The next changes will be ground-shaking: a lot should be changing, and
performance should increase significantly.
2014-05-09 15:21:56 -06:00
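The pattern this commit describes, in miniature; the per-file work here is just a stand-in.

```python
import os

files_to_process = ["a.txt", "b.txt"]        # illustrative paths

for path in files_to_process:
    try:
        print(path, os.path.getsize(path))   # stand-in for the real per-file work
    except OSError as exc:
        # One bad file shouldn't stop the run; report it and keep going.
        print("error processing", path, ":", exc)
```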
Brian b3b960e9ef Fixed the console to exit properly; frankly, it wasn't finished.
The script now exits as expected.
2014-05-09 14:16:18 -06:00
Brian 4bb145e4ca Update README.md 2014-05-09 12:23:56 -06:00
Brian b3051f8dc8 Fixed mixing of Unix/Windows paths. Need to test that this works cross-platform.
Also removed PressEnter. Added a global basename function so we can
override which version we're using; right now I'm seeing if
ntpath.basename works for all cases.
2014-05-09 12:15:32 -06:00
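For reference, the ntpath.basename experiment mentioned above works because ntpath treats both separators as valid, so one module-level override point covers mixed Unix/Windows paths.

```python
import ntpath

basename = ntpath.basename                      # global override point

print(basename(r"C:\work\depot\file.txt"))      # -> file.txt
print(basename("/home/work/depot/file.txt"))    # -> file.txt (ntpath splits on "/" too)
```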
Brian 8bb78e7c02 Added exception catching with printing to the thread-safe console. Removed the press-enter call.
Changed the output a little bit.

Also just realized it should actually be easy to parse `p4 fstat ...`; I
just need to grab the clientFile output, and this script should be sped
up substantially. I need to figure out the best way to break this down:
I don't want it called on one huge directory, but rather on each
subdirectory to split up the work. That said, that would miss the
top-level files. A good alternative to not waiting is to see if I can
grab the process output while it's working, instead of waiting for it to
be done. This would actually work perfectly; it's just tricky trying to
figure out if I can break this up. It would also still delay the start
of the script. I could do a mix of local and tree-based fstat: start
with local and switch to the tree.
2014-05-09 11:51:59 -06:00
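A sketch of the grab-output-while-it-works idea from the note above: start `p4 fstat` over a tree and consume its stdout as it streams, yielding clientFile lines instead of waiting for the command to finish. The flags and parsing are assumptions, not the script's final form.

```python
import subprocess

def iter_client_files(root="..."):
    proc = subprocess.Popen(["p4", "fstat", "-T", "clientFile", root],
                            stdout=subprocess.PIPE, text=True)
    try:
        for line in proc.stdout:                      # read as p4 produces it
            if line.startswith("... clientFile "):
                yield line[len("... clientFile "):].strip()
    finally:
        proc.stdout.close()
        proc.wait()

for local_path in iter_client_files():
    print(local_path)
```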
Brian d3fdef1342 Removed the thread shutdown print, and bumped up the thread count.
I haven't yet determined a good number of threads to use; we'll see.

I also have to change how the directories are being handled: it's kind
of a waste to push every directory to the queue, and it would be faster
if the batches were bigger.

I also still have to work on using fstat across a tree, which will bring
big speed-ups. The output is a bit different, parsing is more complex,
and how we handle things will be a bit different.
2014-05-09 11:25:53 -06:00
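A hedged sketch of the batching change discussed here: push directories onto the work queue in chunks rather than one at a time, so workers spend less time on queue overhead and context switches. The batch size and thread count are placeholders.

```python
import queue
import threading

work = queue.Queue()

def enqueue_in_batches(directories, batch_size=16):
    for i in range(0, len(directories), batch_size):
        work.put(directories[i:i + batch_size])

def worker():
    while True:
        batch = work.get()
        if batch is None:            # sentinel: shut this worker down
            work.task_done()
            break
        for directory in batch:
            pass                     # per-directory scan would go here
        work.task_done()

threads = [threading.Thread(target=worker, daemon=True) for _ in range(4)]
for t in threads:
    t.start()
enqueue_in_batches(["dirA", "dirB", "dirC"])
for _ in threads:
    work.put(None)
work.join()
```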
Brian 3eb7a78339 Fixed a serious output bug: buffers weren't being cleared after being flushed.
I obviously feel much better about this version; it looks like it works
the way it should.
2014-05-09 11:12:41 -06:00
Brian 97da25ce38 Threaded console, threaded cleanup. Yes!
Made the threaded console batch messages so I could manually flush or
clear them. At some point I would consider a safety maximum buffer size
that triggers an auto flush. This worked out really well, though I have
to see why in some cases lines still appear to double up; it could be
something with the process not completing when I expect it to.

This is possibly a naive thread implementation, since it pushes a
directory for every thread, which seems too drastic. I'd like to see how
much better it works without all the context switches. It's also a
matter of figuring out how much to handle yourself before letting
another thread join in. Right now the threads don't branch out too much,
since I think they basically do a breadth-first search, though I have to
double-check that.

Still to come: trying to safely work with fstat across multiple
directories. It's fast, but on the console the script would appear to
stall as it parses everything, so I'd still want to break it down
somewhat so you can see the script making visible progress. I would also
prefer this because then the console messages wouldn't be so short and
blocky.

Improvements to come!
2014-05-08 22:55:12 -06:00
Brian 2b14c4a273 Working on threaded support. 2014-05-08 21:05:55 -06:00
Brian 1f4b52e3a9 Added the version of Python I've tested with.
I'm working on making sure it works in Python 3.4.0
2014-05-08 19:27:29 -06:00
Brian 80163cd15c Update README.md 2014-05-08 19:25:48 -06:00
Brian b327058ccb Added a fix for old now unversioned files with readonly set. 2014-05-08 19:17:36 -06:00
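A minimal sketch of the kind of fix described above: files synced by Perforce sit read-only on disk, so the script needs to clear that attribute before deleting an old, now-unversioned file.

```python
import os
import stat

def force_remove(path):
    try:
        os.remove(path)
    except PermissionError:
        os.chmod(path, stat.S_IWRITE)   # drop the read-only attribute, then retry
        os.remove(path)
```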
Brian 2bb0fa671d Fixed a bug from old code so the script would work, and tweaked the output.
I was trying to use `p4 have` for speed, but it doesn't seem to work
with files that are added to a changelist but not yet in the depot, so I
had to fall back to `p4 fstat`.
2014-05-08 19:05:07 -06:00
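A sketch of the distinction that forced the fall-back to `p4 fstat`: `p4 have` only knows about synced revisions, while fstat still reports files that are opened for add but not yet submitted. The stderr check below is one way to tell unknown files apart; the script may parse things differently.

```python
import subprocess

def known_to_perforce(path):
    # fstat emits tagged output even for files opened for add; it only prints
    # "no such file(s)" (on stderr) for files Perforce has never heard of.
    proc = subprocess.run(["p4", "fstat", path], capture_output=True, text=True)
    return "no such file(s)" not in proc.stderr
```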
Brian 266c5555ba Improved crawling speed, and cleaned up the output.
The script is much faster than before, though it still has plenty of
room for improvement; it will just be more complicated. Calling
`p4 fstat` on the entire directory gives you everything you need up
front, but the results are depot paths, which makes things a little
annoying to parse when you have workspace mappings that move things
around: the local path may differ from the depot path, and it becomes
harder to be 100% sure you're referring to the same file. I don't want
to call p4 on every file to confirm that, and what I'm doing now is the
easiest, safest way to be sure of it, as far as I know.

Another way to speed this up is to add thread crawlers; I'm just not
sure yet how many threads are a good idea to use on HDDs versus SSDs.
2014-05-08 19:05:07 -06:00
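For the depot-path-versus-local-path problem described above, one alternative worth noting is `p4 where`, which resolves a depot path through the client view. The naive parse below assumes no spaces in paths and no exclusion lines, so treat it only as a sketch.

```python
import subprocess

def depot_to_local(depot_path):
    out = subprocess.run(["p4", "where", depot_path],
                         capture_output=True, text=True).stdout.strip()
    # Each mapping line is "<depotFile> <clientFile> <localFile>".
    return out.split(" ")[-1] if out else None
```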
Brian 09a4811be4 Update README.md 2014-05-08 15:52:08 -06:00
Brian e2d660e486 Updated readme, added warning, also figured out I'm not parsing p4ignore correctly. 2014-05-08 15:44:26 -06:00
Brian 32aaab1578 Create README.md 2014-05-08 15:42:50 -06:00
Brian 27e2e32f7e Added the basic script. 2014-05-08 15:37:57 -06:00