Monitor SLOG to manage Synchronizing

Hello Dynamo Friends :slight_smile:

I want to solve the problem of simultaneous syncing to central, and therefore I have built a graph that monitors the SLOG* file. It works pretty well! It gives the user one of the following messages after running it:

How am I doing that?

  • I search the SLOG for the last STC entry and check whether it is the close entry “<STC”.
  • If yes, you can sync; if no, you can't.
  • I get the user by searching for the last session ID and then for the first entry of that ID.

What else do I want to do but don't know how:

  • I would like to loop the graph so it keeps checking the SLOG until the active syncer is finished, so the sync then starts automatically.
  • I would like to get all users that are currently in the model.

import io
filePath = IN[0]
file = io.open(filePath, "r", encoding = "utf-16-le")
string = file.read()

# The character before the last "STC" tells the state:
# ">STC" = a sync has started, "<STC" = the last sync has finished
STC_lastindex = string.rfind("STC")
STC_substring = string[STC_lastindex-1:STC_lastindex+3]
can_sync = STC_substring == "<STC"

# Find the last session ID ($xxxxxxxx), then the first entry of that
# session, which carries the user="..." attribute
session_lastindex = string.rfind("$")
session_substring = string[session_lastindex:session_lastindex+8]
session_firstindex = string.find(session_substring)
user_index = string.find("user=", session_firstindex, session_firstindex+100)
user_substring = string[user_index+6:user_index+10]
user_mark = user_substring.find("\"")
if user_mark == -1:
	user = user_substring
else:
	user = user_substring[0:user_mark]

OUT = can_sync, user

Happy about any advice :slight_smile:

*How to read the Revit Worksharing Log .SLOG file (Autodesk Knowledge Network).

1 Like

One quick note: Worksharing Monitor will likely perform better than this in the context of non-cloud and non-Revit Server projects. For those two, SLOGs don’t exist, so there isn’t a way to watch for concurrent syncs. It also provides the list of users and the concurrency check you asked for.

For looping, Dynamo isn’t really built for this. It is more of a “run once and get results once” type of environment. You could try to comb the SLOG via Python and use a while loop that reads the last 100 lines or so, closing the loop when the value is set and then starting the Sync command. However, the result would be slower than what you get with OOTB tools now, and could result in concurrent syncs at as high a rate as you have now (you start the tool, I start the tool, and George is already syncing)…
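The “read the last 100 lines” idea could be sketched like this (my sketch, not from the thread; `collections.deque` keeps only the tail in memory, and the demo writes a throwaway UTF-8 file where the real SLOG would need `utf-16-le`):

```python
import io
import os
import tempfile
from collections import deque

def tail(path, n=100, encoding="utf-8"):
    """Return the last n lines of a text file, newline-stripped."""
    with io.open(path, "r", encoding=encoding) as f:
        return [line.rstrip("\n") for line in deque(f, maxlen=n)]

# Demo on a throwaway file; the real SLOG would use encoding="utf-16-le"
path = os.path.join(tempfile.mkdtemp(), "demo.slog")
with io.open(path, "w", encoding="utf-8") as f:
    f.write("\n".join("line %d" % i for i in range(500)))

print(tail(path, n=3))  # → ['line 497', 'line 498', 'line 499']
```

A while loop would then call `tail()` repeatedly (with a sleep in between) and inspect the returned lines for the STC markers.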

You can get the session starts pretty easily by looking at the >Session lines, tying the session ID back to the session information. However, the close lines aren’t as reliable, as crashed sessions don’t get closed out in the SLOG (or the journal, for that matter); you can use them together with some other relevant info (e.g. users can’t be duplicated in the same model, sessions shouldn’t be longer than 12 hours, etc.) to filter out most items, but some crashes may still remain.
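That filtering could be sketched roughly like this (hypothetical, simplified SLOG lines; real entries carry more fields):

```python
from datetime import datetime, timedelta

# Hypothetical, simplified SLOG lines; real entries carry more fields
slog = [
    '$aaaa1111 2022-10-09 01:00:00.000 >Session user="anna"',
    '$bbbb2222 2022-10-09 13:05:00.000 >Session user="ben"',
    '$cccc3333 2022-10-09 13:40:00.000 >Session user="anna"',
    '$bbbb2222 2022-10-09 13:50:00.000 <Session',
]
now = datetime(2022, 10, 9, 14, 0, 0)

# Sessions that closed normally
closed = {line.split()[0] for line in slog if "<Session" in line}

active = {}
for line in slog:
    if ">Session" not in line:
        continue
    parts = line.split()
    sid = parts[0]
    start = datetime.strptime(parts[1] + " " + parts[2], "%Y-%m-%d %H:%M:%S.%f")
    user = line.split('"')[1]
    if sid in closed:
        continue  # closed normally -> user left the model
    if now - start > timedelta(hours=12):
        continue  # older than 12 hours -> probably a crashed session
    active[user] = sid  # duplicate users: the newest session wins

print(sorted(active))  # → ['anna']
```

Here anna's 01:00 session is dropped by the 12-hour rule as a likely crash, ben's session closed normally, and only anna's 13:40 session remains "probably active".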

6 Likes

WSM is what it is: a workaround for the simultaneous syncing problem, not a solution.

As I can't even use this workaround for REASONS, I have to go another route. And in my opinion, getting a “you can't sync now” message is better than looking at the WSM and waiting.

I don't want to use the WSM, and I cannot use it.
I would like to use a plugin that queues the users for syncing, but I can't.
So all I have is Dynamo, and I want to get everything out of it that I can.

Simultaneous syncing in large models with ~10 users is a really big problem for us, so I was hopeful that it is possible to improve that with Dynamo.

I never had a crash during syncing, so I don't really understand why this should be a problem. And if it happens, why should I not be able to identify that?

and could result in concurrent syncs at as high of a rate as you have now (you start the tool, I start the tool, and George is already syncing)

I don't get that; I don't think a truly simultaneous run of the Dynamo script is even possible. I could even log which user was the first one, and I can even log which users are waiting for the sync and in which order they started the script…

I'm at the absolute start of testing, with only 2 users, and I will learn and progress with more data.
As always, I'm very thankful for your comments @jacob.small :slight_smile:

edit: I've never used a while loop, I'll look into that!

What is stopping you from going the route of an add-in? As painful as WSM is, Dynamo is likely going to give worse performance, sad as it may be to say, unless you couple it with an external server. Basically, have the app request a sync command from another system, which queues you up until the previous user’s SWC is done… this is part of what Revit Cloud Worksharing does, on some levels anyway, as that system will prefetch changes to the model before committing, making the sync faster in most situations. That would also make logging pretty much cut and dried.

I strongly believe that Dynamo is likely the wrong tool for the job here, and that WSM (as problematic as it may be for you) or another add-in would be a better route for you. WSM and Dynamo are both limited by the capabilities of the SLOG system (in fact I doubt that Dynamo can send and receive data through the network fast enough to keep up with the pace at which SLOGs update while keeping the system’s worksharing online).

All of that said, from what I have seen in terms of raw data and field experience, slow syncs are almost always the result of users not syncing often enough. This is my opinion based on both anecdotal observation and validated data (sadly I can’t share said data as it’s customer owned). Looking at the frequency of syncs per model and user would likely be more beneficial in the near term, as that data is in the SLOG (count the SWC actions grouped by session ID, and divide by the length of the session). Look for a sync every 10-15 minutes of session length. Longer than that slows everyone down eventually, even if you’re just viewing things (as you have to download more data, it takes longer to incorporate everything into your copy of the model, and it still has to send all of those changes back to central).
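The suggested frequency metric (count SWC actions per session, divide by session length) could be sketched like this, using hypothetical simplified entries:

```python
from datetime import datetime

# Hypothetical, simplified entries: (session id, timestamp, action);
# the real SLOG mixes these with many other action types
slog = [
    ("$s1", "2022-10-09 08:00:00", ">Session"),
    ("$s1", "2022-10-09 08:30:00", ">STC"),
    ("$s1", "2022-10-09 09:00:00", ">STC"),
    ("$s1", "2022-10-09 10:00:00", "<Session"),
]

fmt = "%Y-%m-%d %H:%M:%S"
starts, ends, syncs = {}, {}, {}
for sid, ts, action in slog:
    t = datetime.strptime(ts, fmt)
    if action == ">Session":
        starts[sid] = t
    elif action == "<Session":
        ends[sid] = t
    elif action == ">STC":
        syncs[sid] = syncs.get(sid, 0) + 1

for sid in starts:
    hours = (ends[sid] - starts[sid]).total_seconds() / 3600.0
    rate = syncs.get(sid, 0) / hours  # syncs per hour of session length
    print(sid, rate)  # $s1: 2 syncs over a 2-hour session -> 1.0 per hour
```

By the 10-15 minute rule above, a healthy session would show roughly 4 to 6 syncs per hour.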

The crash situation I mentioned wasn’t an issue with syncs (if that ever happens the central would likely corrupt to some degree), but more with tracking ‘who is in the model’ at any given point. This is because the session doesn’t close in the SLOG until the user closes the model, which doesn’t happen when someone crashes. When this happens the user’s session would still read as active in the SLOG, because there wouldn’t be a ‘session close’ action. Filtering by date and removing duplicate users can resolve some of that though.

My thought on the ‘we both use the tool’ scenario resulting in concurrent syncs… if you hit go and I hit go at about the same time while Greg is already syncing, the system wouldn’t know that I have to wait for you after Greg finishes, or vice versa. You need an added step of an intermediate file which manages the queue to prevent concurrent access. Sending the session ID to another file and appending to that until it’s your turn (and then deleting your line) might resolve this, but a “server” application to manage this would be best; it’d remove the while loop and make the coding pretty straightforward.
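The intermediate queue file could be sketched like this (pure illustration with made-up session IDs; it ignores the file-locking races that a real “server” application would have to solve):

```python
import io
import os
import tempfile

# Made-up queue location and session IDs, for illustration only
QUEUE = os.path.join(tempfile.mkdtemp(), "sync_queue.txt")

def enqueue(session_id):
    """Append our session to the end of the queue file."""
    with io.open(QUEUE, "a", encoding="utf-8") as f:
        f.write(session_id + "\n")

def first_in_line(session_id):
    """True once our session is at the front of the queue."""
    with io.open(QUEUE, "r", encoding="utf-8") as f:
        lines = f.read().splitlines()
    return bool(lines) and lines[0] == session_id

def dequeue(session_id):
    """Remove our session after syncing, freeing the next user."""
    with io.open(QUEUE, "r", encoding="utf-8") as f:
        lines = [l for l in f.read().splitlines() if l != session_id]
    with io.open(QUEUE, "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + ("\n" if lines else ""))

enqueue("$7570c936")
enqueue("$b22761d0")
print(first_in_line("$b22761d0"))  # → False: $7570c936 syncs first
dequeue("$7570c936")               # first user finished syncing
print(first_in_line("$b22761d0"))  # → True: now it's our turn
```

Each user would poll `first_in_line()` in a bounded while loop, sync when it returns True, and then call `dequeue()`.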

On the loops front… be careful with your foray into while loops; they can put you into an infinite loop quickly if you aren’t careful. Always code an end point first (e.g. after 10 iterations) or similar. And if you’re going to go back and re-read a file, give it a reasonable wait period; say 500 or 1000 ms between executions. Otherwise you might rate-limit yourself.

2 Likes

If plugins are too hard, maybe consider using hooks in pyRevit. You’re already doing pretty advanced programming here that could be translated into Python, if not an add-in, once you’re ready. Personally I’m finding it a lovely middle-ground platform between Dynamo and add-ins, and it’s also helping me learn about concepts that lead into app dev, like WPF.

PS: cool idea and outcomes!

2 Likes

International company, plugins by an affiliated company, no collaboration with IT or BIM managers possible, and the word Dynamo is better not spoken out loud^^ I was asked to withhold my opinion in general :smiley:

Man, there are some bung IT departments and companies out there right now when it comes to innovation. Keep an eye on the job market if they don’t change or take your ideas on board; for some firms out there these platforms are already business as usual.

3 Likes

Wow, that probably causes some problems sometimes :wink:

1 Like

Yep, IT guys :wink: I worked for a so-called international firm as well,
where everything was locked, even the icons on the desktop, and there was no way to get admin rights… I couldn’t do my job there, so I handed in my resignation letter, and then suddenly they could give me rights ;)… but anyway, I don’t work for firms that don’t trust me…

1 Like

Very much this. They are costing themselves money, and will likely be in poor enough shape to be bought out (if not worse) soon enough by the much larger international companies people like me are training users up in daily.

1 Like

Yes Jacob, well said; I agree :wink:

Thanks for your replies; glad to hear there are other companies out there. For now I'm happy with my coworkers, and I like to help them with my scripts :slight_smile:

@jacob.small, thanks for your input! But I'm curious what I can achieve with Dynamo; maybe my test will confirm all your doubts, we will see!

A first draft for syncing a user automatically after the first syncer has finished:

  • Get the SLOG file path (now with Python)
  • Get the sync status from the SLOG
  • Write the sync log to a CSV
  • Read the sync log from the CSV
  • Check if the active syncer is the last one in the sync log
  • If yes, start syncing with a while loop

The while loop and testing are the tasks for tomorrow.

# Resolve the central file path (the DFS path is mapped to drive P:)
CentralFilePath_dfs = BasicFileInfo.Extract(doc.PathName).CentralPath
CentralFilePath = "P:\\" + CentralFilePath_dfs.split("\\dfs\\")[1]

# File name and the matching SLOG name
FileNameList = CentralFilePath.split("\\")
FileName = FileNameList[-1]
SlogFileName = FileName.replace(".rvt", ".slog")

# The project number is the first part of the file name
ProjectList = FileName.split("_")
Projectnumber = ProjectList[0]

# The SLOG lives in the backup folder next to the central file
SlogFileFolder = CentralFilePath.replace(".rvt", "_backup")
SlogFile = SlogFileFolder + "\\" + SlogFileName

OUT = SlogFile, Projectnumber
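Steps 3 to 5 of the draft (the CSV sync log) could look roughly like this; the CSV layout here is my assumption, since the post only shows step 1:

```python
import csv
import io
import os
import tempfile

# Made-up sync-log location; the real graph would use a shared network path
LOG = os.path.join(tempfile.mkdtemp(), "synclog.csv")

def log_sync_request(project, user):
    """Append one 'user wants to sync this project' row."""
    with io.open(LOG, "a", encoding="utf-8", newline="") as f:
        csv.writer(f).writerow([project, user])

def my_turn(project, user):
    """True when this user made the most recent request for the project."""
    with io.open(LOG, "r", encoding="utf-8", newline="") as f:
        rows = [r for r in csv.reader(f) if r and r[0] == project]
    return bool(rows) and rows[-1][1] == user

log_sync_request("P123", "GEPA")
log_sync_request("P123", "USER2")
print(my_turn("P123", "USER2"))  # → True
print(my_turn("P123", "GEPA"))   # → False
```

The while loop from step 6 would then poll `my_turn()` together with the SLOG sync status before starting the sync.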
1 Like

Today I tried to put everything I can into Python, and used my first try, except and while statements.
But now I'm struggling with the while loop when trying to test text from a CSV file.

Here is a test while loop that seems to work, even with a break after a specific amount of time. A sleep command can be added so the CPU isn't overloaded.

import time
timeout = time.time() + 10 #seconds

x = 0
while x < 100000:
    if time.time() > timeout:
        break
    x += 1

So I get the concept of while.

Now I want to test text in a CSV file. The loop should run until I change the text in the CSV file. It does not work, so here is the code and my explanation.

import time
	
path = "C:\\Users\\GEPA\\Desktop\\fruit.csv"

i = "string"
while i=="banana":
	with open(path) as csv_file:
		csv_reader = csv_file.readlines()	
	i=csv_reader[0]
	
OUT=i

I think the file opening and reading has to be inside the while loop, because I want to do that again and again until the condition is met. For that I want to overwrite the string i every time. But there must be something wrong, because the code does nothing at all. So it seems I don't get the while concept yet^^

But the isolated import works:

Maybe the with/as statement is not working inside the loop? Or does the string overwriting not work? Please enlighten me.

Current Status:

Ohhh, I figured it out :smiley:

The test condition has to be true for the loop to run, and the loop stops as soon as it is false.
So my condition must be != "banana", so it becomes false as soon as the value is "banana".

So it really works as soon as I change the name in the CSV file and save it :smiley:

[video-to-gif output image]

The code works; it was just a testing issue.
So here is the test code with sleep and timeout commands.

import time
timeout = time.time() + 60 #seconds

path = "C:\\Users\\GEPA\\Desktop\\fruit.csv"

i = "timeout"
while i!="banana":
	time.sleep(0.25) # sleep for 250 milliseconds
	if time.time() > timeout:
		break
	with open(path) as csv_file:
		csv_reader = csv_file.readlines()
	i = csv_reader[0]  # note: keeps a trailing "\n" if the file has more lines
	
OUT=i

And here is the working code for while looping the sync status:

import io
import time

toggle = IN[0]

if toggle == True:

	timeout = time.time() + 10  # seconds
	can_sync = False
	while can_sync != True:
		time.sleep(0.25)  # sleep for 250 milliseconds
		if time.time() > timeout:
			OUT = "break"
			break

		filePath = IN[1][0]
		file = io.open(filePath, "r", encoding = "utf-16-le")
		string = file.read()

		STC_lastindex = string.rfind("STC")
		STC_substring = string[STC_lastindex-1:STC_lastindex+3]
		can_sync = STC_substring == "<STC"

		OUT = can_sync
else:
	OUT = []

I'm now ready for testing:

First test with 2 users, while one user is currently syncing:

So in Dynamo everything seems fine!
But let's take a look at the SLOG:

$7570c936 2022-10-09 11:35:02.982 >STC

$b22761d0 2022-10-09 11:35:09.712 >STC

$7570c936 2022-10-09 11:35:25.827 <STC

$b22761d0 2022-10-09 11:35:58.079 <STC

Hmm, this looks like the sync of user 2 started immediately after running the script. So the while loop did not work; there was no waiting for user 1 to finish. Strange.

And another problem: the SLOG is sometimes locked, so I cannot read the file with Python. So I think this is also a case for a while loop? Read the file until there is no error?!

“The process cannot access the file because another process has locked a portion of the file”
In that case the file.read is already in the while loop…
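A bounded retry loop could handle the intermittent lock; here is a sketch (the timeout and wait values are placeholders, and `utf-16-le` is the standard codec name for the SLOG encoding):

```python
import io
import time

def read_slog_with_retry(path, timeout_s=10, wait_s=0.5):
    """Keep retrying the read until the lock clears or the timeout expires."""
    deadline = time.time() + timeout_s
    while True:
        try:
            with io.open(path, "r", encoding="utf-16-le") as f:
                return f.read()
        except IOError:
            if time.time() > deadline:
                raise           # still locked after timeout_s -> give up
            time.sleep(wait_s)  # wait a bit before the next attempt
```

The `with` block also guarantees the file handle is closed between attempts, so the script itself never keeps the SLOG open.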

Edit:

So with the help of TuneUp I could verify that user 2 gets sync permission after 5 seconds, but it should be after 35 seconds, when user 1 has finished. So the problem is my “loop sync status” that gets triggered 30 seconds too soon.

So the problem was just that I searched for “<STC” instead of “>STC\n”, and so I got the wrong entry.

The mentioned file access error is because of my opening/closing method. This should now be avoided by switching to the following pattern, which closes the file automatically:

with io.open(filePath, "r", encoding = "utf-16-le") as file:
	string = file.read()

And now everything is working :smiley:

$c663932c 2022-10-09 14:20:32.026 >STC
$c663932c 2022-10-09 14:20:50.346 <STC

$004c26dc 2022-10-09 14:20:51.323 >STC
$004c26dc 2022-10-09 14:21:15.373 <STC

So NOW we can talk about time!

Managed synchronizing:
While user 1 syncs in 20 seconds without interruption, user 2 is queued for 15 seconds and can then sync in 25 seconds.

So is this any good? Let's take a look at what happens if both users sync simultaneously:

Simultaneous:
User 1 starts syncing and boom, user 2 interrupts him, which gives a total of 50 seconds!
User 2 is lucky; he cuts in and is finished after only 22 seconds!

So now I can imagine what is going on if more users sync simultaneously, and now I see how useful this managed synchronizing is. So I'm really happy with these results.
This is really interesting, and I will probably run tests with more users!

Simultaneous sync of 3 users:

User 1: 61s
User 2: 88s
User 3: 21s

So again, the last syncer is the lucky one.

4 Users:

User 1: 109s
User 2: 54s
User 3: 20s
User 4: 78s

The simultaneous syncing seems to follow one simple rule: one user is always the loser :smiley:
So I can't agree with you, @jacob.small, that simultaneous syncing is no problem; I see it in this test and in the daily struggle of my coworkers. And the test shows runtime multiplied by over 5 times, even though I synced all users before the test.

1 Like

So it’s not that simultaneous syncing isn’t a problem, but that it’s less frequent and less impactful than infrequent syncs.

Infrequent syncs will slow everyone down and significantly exaggerate the impact of concurrent syncs. As such, I always advise people to put the effort where they get the most bang for their buck, and focus on getting better user behavior first. As far as why infrequent syncs are the big problem…

When I’ve been called into cases with extreme sync times, I usually see numbers upwards of 10k in one or both directions. This happens in situations where it’s been 4 hours and user 1 hasn’t synced, but user 2 has been syncing every 15 minutes and imported 8 detail views since user 1 last synced. That means user 1 has to add all of user 2’s stuff to his local model, and those changes have to be reconciled against any of his own changes before he can write back to central. The data user 1 has to add includes: the new detail views, all the elements in the views, new element types and families, tags in each view referencing the details, the sheet the views are placed on, the viewport for each view, the parameters for everything… The list gets VERY big, very quickly. Doubly so when working with model elements which have children in them.

Considering the scope of some of the data and the effort to reconcile everything, it’s still extremely fast, all things considered. But it’s still not as fast for user 1 as it is for user 2, and user 1 feels it should be faster because they only have a handful of changes. Had they synced every 15 minutes like user 2, they’d have less to reconcile at once, and the total time spent in a syncing state would decrease as a result.

The problem is similar when you reverse the roles of user 1 and user 2. However, it doesn’t really get bad until both user 1 and user 2 are adding a lot of content (common in practice) and only syncing once every half day or so, causing user 3+ significant wait time (and leading to more concurrent syncs too!).

2 Likes

Thanks for that detailed explanation. As said before, I have zero experience with syncing problems, but the coworkers keep complaining, and they use WSM plus a Teams chat to coordinate syncs; not for all projects, but when it's necessary. So why not solve both problems, syncing frequency and simultaneous syncing!

Hello Jacob,

I thought about the different syncing options, and now I think that queued syncing and simultaneous syncing will lead to the same overall amount of time; just the time split between users will be different.
The biggest problem with my method is that you cannot work while you are queued, because of Dynamo?! I think with a plugin it would be possible to run the while loop in the background?

But before we continue talking about syncing theories, I would like to finish my code, but I'm stuck.

I'm now going the full Python way for the first time!
For now I'm only missing the while loop and the writing to the sync log; it will be no problem to add that.
But for some reason I'm getting into an endless syncing loop, even though there is no while loop yet!

I think it might be my function definition and the way I'm calling it; I've never used a definition before:

def Sync():
	tOptions = TransactWithCentralOptions()
	rOptions = RelinquishOptions(False)
	rOptions.StandardWorksets = True
	rOptions.ViewWorksets = True
	rOptions.FamilyWorksets = True
	rOptions.UserWorksets = True
	rOptions.CheckedOutElements = True
	sOptions = SynchronizeWithCentralOptions()
	sOptions.SetRelinquishOptions(rOptions)
	sOptions.Compact = False
	sOptions.SaveLocalBefore = True
	sOptions.SaveLocalAfter = True
	sOptions.Comment = ""
	TransactionManager.Instance.ForceCloseTransaction()

	doc.SynchronizeWithCentral(tOptions, sOptions)

for Line in SLOG:
	if "STC" in Line:
		if Line[-4:] == ">STC":
			# while loop will be added here
			OUT=""
		elif Line[-4:] == "<STC":
			Sync()
			OUT = "Instant Sync successfull"
		else:
			OUT = "fail"

Am I doing something wrong here?
Here is the full code:

import io
import clr
clr.AddReference("RevitAPI")
import Autodesk
from Autodesk.Revit.DB import *

clr.AddReference("RevitServices")
import RevitServices
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager

doc = DocumentManager.Instance.CurrentDBDocument

def Sync():
	tOptions = TransactWithCentralOptions()
	rOptions = RelinquishOptions(False)
	rOptions.StandardWorksets = True
	rOptions.ViewWorksets = True
	rOptions.FamilyWorksets = True
	rOptions.UserWorksets = True
	rOptions.CheckedOutElements = True
	sOptions = SynchronizeWithCentralOptions()
	sOptions.SetRelinquishOptions(rOptions)
	sOptions.Compact = False
	sOptions.SaveLocalBefore = True
	sOptions.SaveLocalAfter = True
	sOptions.Comment = ""
	TransactionManager.Instance.ForceCloseTransaction()

	doc.SynchronizeWithCentral(tOptions, sOptions)

CentralFilePath_dfs = BasicFileInfo.Extract(doc.PathName).CentralPath
CentralFilePath = "P:\\"+(CentralFilePath_dfs.split("\\dfs\\")[1])

FileNameList = CentralFilePath.Split("\\")
FileNameListLength = len(FileNameList)
FileName = FileNameList [FileNameListLength-1]
SlogFileName = FileName.Replace("rvt","slog")

ProjectList = FileName.Split("_")
Projectnumber = ProjectList[0]


SlogFileFolder = CentralFilePath.Replace(".rvt","_backup")
SlogFile = SlogFileFolder+"\\"+SlogFileName
LogFile = "......."

with io.open(SlogFile, "r", encoding = "utf-16-le") as file:
	SLOG = file.read().splitlines()
	
with io.open(LogFile,"r", encoding = "utf-8") as Logfile:	
	LOG = Logfile.read().splitlines()

for Line in SLOG:
	if Line[-4:] == ">STC":
		LastStartSession = Line[:9]
		
SLOG.reverse()	

for Log in LOG:
	if Projectnumber in Log:
		LastItem = Log
		LastUser = LastItem.split("\t")[1]
		for i in SLOG:
			if "user=" in i and LastUser in i:
				Index = SLOG.index(i)
waitforsession = SLOG[Index+1][:9]

for Line in SLOG:
	if LastStartSession in Line:
		SessionIndex=SLOG.index(Line)

user = SLOG[SessionIndex-1].split('"')[1::2][0]

SLOG.reverse()

for Line in SLOG:
	if "STC" in Line:
		if Line[-4:] == ">STC":
			# while loop will be added here
			OUT=""
		elif Line[-4:] == "<STC":
			Sync()
			OUT = "Instant Sync successfull"
		else:
			OUT = "fail"
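The thread ends here, but one likely cause of the endless syncing (my guess, not confirmed in the thread) is that the final `for` loop calls `Sync()` once for every `<STC` line in the whole SLOG, instead of only reacting to the most recent entry. Scanning backwards and acting once could look like this sketch:

```python
# Sample entries shaped like the SLOG excerpts earlier in the thread
SLOG = [
    "$7570c936 2022-10-09 11:35:02.982 >STC",
    "$7570c936 2022-10-09 11:35:25.827 <STC",
    "$b22761d0 2022-10-09 11:35:30.000 >STC",
]

def last_stc(lines):
    """Return '>STC' or '<STC' from the most recent STC entry, else None."""
    for line in reversed(lines):
        if line.endswith(">STC") or line.endswith("<STC"):
            return line[-4:]
    return None

marker = last_stc(SLOG)
if marker == "<STC":
    # the last sync has finished -> call Sync() exactly once here
    result = "safe to sync"
elif marker == ">STC":
    # someone is still syncing -> this is where the planned while loop waits
    result = "someone is syncing"
else:
    result = "no STC entries found"

print(result)  # → someone is syncing
```

With this shape, `Sync()` can only ever fire once per run, regardless of how many old `<STC` entries the log contains.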