I’m curious how most of you handle package management within your company / team. Do you mostly use Dropbox or other file sharing services for this purpose?
In our company we use a server for file storage and sharing. If we put our packages and nodes on the server, it slows Dynamo down significantly. So we are looking for alternative methods.
Put your packages on a smaller drive, a big drive is probably hampering the performance.
For comparison, are you able to post the specs of your server? How are you measuring, and what is the performance slowdown related to (placing package nodes?)
@Marcel_Rijsmus why would this be the case? Smaller drive, fewer users, faster search?
Saw it on GitHub; I had a case where only half of my packages loaded from a network location.
I can find the link when I’m at the workplace, but right now I’m at home. I remember @Andreas_Dieckmann contributing to the issue, so maybe he knows.
@Marcel_Rijsmus You’re probably referring to this one: https://github.com/DynamoDS/Dynamo/issues/8246
@r.kilsdonk There’s actually an open issue regarding performance of node libraries on network drives: https://github.com/DynamoDS/Dynamo/issues/7848
We’ve never had any performance issues with that, but other firms robocopy the node library to the users’ machines: https://github.com/DynamoDS/Dynamo/issues/8348#issuecomment-350768419
Some problems solve themselves for no apparent reason. Ever notice that?
Drive size matters partially because of how Dynamo runs the associated DLLs of zero-touch nodes off the server. Limiting your library to simple custom nodes avoids that error pretty well, but significantly limits your library’s functionality. For this reason I recommend partitioning the drive if possible.
Also stay away from ‘cloud mirroring services’, as these have been at the heart of a lot of problems; the services don’t deal with DLLs well.
The robocopying is a good solution if you have a good group policy (GP) that can drive it.
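For anyone wanting to try the robocopy route, here is a minimal sketch of a logon script that could be pushed via a GPO. The share path, the Dynamo package folder, and the switches are my assumptions, not something confirmed in the linked issue — adjust them to your server and Dynamo version:

```bat
@echo off
rem Mirror the central Dynamo package library to the user's local
rem package folder at logon. Paths below are placeholders.

set "SOURCE=\\fileserver\dynamo\packages"
set "DEST=%APPDATA%\Dynamo\Dynamo Revit\2.0\packages"

rem /MIR mirrors the source (also deleting local files removed centrally),
rem /R:2 /W:5 limits retries so logon isn't blocked by a flaky share,
rem /NP /NFL /NDL keeps the log output quiet.
robocopy "%SOURCE%" "%DEST%" /MIR /R:2 /W:5 /NP /NFL /NDL
```

Because each user then loads packages from their local disk, the network-drive performance issue from the GitHub threads above shouldn’t apply.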
Thanks for the info guys. I see this is an open issue.
@jacob.small with ‘cloud mirroring services’ you mean services like dropbox and google drive right?
I have another question about how you manage adding and removing packages and nodes in the central library. If a user needs a certain package for a script, do they have to request that it be added to the central library? Or can a script copy the local library to the central one, so that everyone gets the packages the user installed or made?
I hope I explained that clearly.
Thanks in advance,
@jacob.small I just tested Dropbox and it seems to work fine. Can you elaborate on the problems you encountered?
From the Dynamo 2.0 Release Notes:
If the network drive pointed to by “Manage Node and Package paths” is > 1TB in size loading Dynamo may take a minute or more, irrespective of actual package size. We recommend drive size of 100GB or smaller.
The cloud mirroring stuff is more of a reference to virtualized servers where every change is uploaded and downloaded to/from a cloud service and synchronized across multiple offices. I believe Panzura is one such system, but I haven’t confirmed yet.
I have personally had issues with Dropbox and OneDrive hosting graphs, packages and the content accessed by them, but with one user accessing at a time this was no problem. It’s multiple users attempting multiple pushes where it becomes an issue.
My statement was more about run times than opening, but yes, 100 GB is the ‘high end’ of the size based on current info. This was smaller by a large factor a few months ago.
Yes, this is a very large network drive, so loading takes a long time.
Okay, thanks. We’ll be using Dropbox then. There is only a small group working with Dynamo within our team, so that won’t be a problem.
@r.kilsdonk Check out this webinar I gave with a few coworkers back in late May. I was waiting on a ‘special link’ from corporate but that doesn’t appear to be coming (been about a month now) so I’m done waiting.