Looking to build a new computer for Mastercam 2020/Camplete - Page 3
  1. #41

    Quote Originally Posted by Mtndew View Post
    That hasn't been my experience to be honest. 2018,19,20 and even 2021(beta) seem to all run the same performance-wise in fact I would say that 2020 is better than previous versions when calculating toolpaths.

    One thing that some people overlook and can affect performance is Windows. It can/does get cluttered up over time and if you have the time or patience, you can do a windows refresh or even a full clean install of Windows. I've done this a couple of times and the difference is night and day.
    I'd be into a fresh install. I don't use my computer for anything other than Mastercam, so I certainly won't be losing anything.

    As for performance by year, you might be absolutely right. I know when they switched the interface (2016-2017 maybe) I noticed a huge decrease in performance. Perhaps in the other years it's just been my system getting slower.

  2. #42

    Quote Originally Posted by gregormarwick View Post
    Generally when a post contains this much nonsense I only respond to it if there's a chance of any of it getting disseminated as fact. It honestly reads like your knowledge of computers ends abruptly at around the year 2k and everything else is just invention.
    Everyone is entitled to their own opinion. I don't drink the kool-aid, you like it. That's fine.

    But I will mention that HP Enterprise just spent a couple hundred million licensing Silicon Graphics "International" NUMAlink machines so they could sell them into the enterprise HPC market. It is unfortunate their guys did not know that all they really had to do was stick a bunch of Xeons into a box and use these modern APIs. Maybe they could have Siri tell the compiler to optimize their code and voila! Out the other end, the unicorn!

    I also found this sale by "Silicon Graphics International" interesting. SGI went bankrupt (for the second time) in 2006. They shit on all the "stakeholders", plus their suppliers and customers. Miraculously, ten years later some hidden remnant comes out of the closet with "intellectual property" worth hundreds of millions to sell.

    That's the software industry we all know and love. Crooks.

    Very similar to your claims about APT in the other thread.
    APT works. It does five axis, as demonstrated by many of the airplanes flying our skies - though not the 737 Max. It is public domain. It was the grandparent of NX. One version I have cost $300. I programmed hundreds of parts with it.

    Do you have a problem with that ?

    Let's leave the world of imagination and make this concrete. For one example, I did all the LmaRR Discs (all components and the dies except I didn't do the spinning, some models as few as four or five parts) for decades in APT. Nothing special, but $300 vs $15,000+ plus maintenance, what's that over ten or fifteen years ? okay, I'm an idiot. I shoulda bought modrun software ! It would have been so much better ! I coulda saved fifteen minutes !


    Apple, contrary to what you claim, have probably the best implementation of a thread dispatcher in any current mainstream OS in the form of Grand Central Dispatch, which is crazy efficient when software is written to properly utilise it.
    And this is why, when you put a CD with butchered names into the newest latest Apple airweight supercoolifragilistic expialadocious laptop, the arfing desktop locks up for five minutes. In fact, the arfing desktop locks up way too frequently for other undiscovered causes. It's infuriating.

    What you say is/may be true in principle. But on my planet, it has not worked out in practice.

    Just out of curiosity, do you ever build software ? Without using the "latest greatest" version of gcc and all that crap ? My experience has been that 20% of the time it is great, the software is well written and thoroughly tested. Maybe 40 or 50% of the time it is okay, you can make it work. And a full 30 or 40% of the time it is absolute shit.


    Regarding ethernet/numalink - Distributed compute does not treat ethernet as a CPU bus obviously... Some data to be processed is bundled up in a packet and sent wholesale to the remote computer where a resident process works on it and sends the results back. This is not a new thing by any stretch so idk why you think this is weird.
    What I thought was weird is that Mastercam, which is NOT that computationally intense, would benefit from this when, as you say, modern multi-core CPUs and that incredibly fast memory system should spit it out wham bam thank-you-ma'am in seconds on the local workstation. That was what I thought was weird. Mastercam is not computing weather forecasts ... (and even the latest hot-shit European weather-forecasting machine is once again a single system image, not distributed.)

    waterline/z level also where each vertical pass can be processed independently and link moves considered later.
    This is an interesting idea, and if true, the OP should see that the first level of his CL calculation takes a long time, but all the rest are already done by the time the first is finished.

    Is this what happens ?

    I generally agree that cad/cam developers have been slow to implement real multithreading, but the fact is they all do to some degree nowadays. No reason to cast aspersions towards op's observations, what he stated is perfectly possible.
    No aspersions were cast. I do not think his test was accurate though. Most people are not aware that in a multi-tasking os there are a lot of other things going on that have nothing to do with the program being tested. So, as an extreme example, was Windows doing an update while he looked at the cpu-meter ? An anti-virus program running a scan ? Several of those Windows "services" doing their thing ?

    It is actually pretty difficult to shut all that down for a real test. Most people don't know that, it's not an "aspersion" to doubt what he saw is pertinent to the "how many threads does Mastercam use ?" question. There are compiler tools that will run just the program and report on what it is doing. That's what one would need to do to get a true answer.

    Sorry, Brian T, no aspersions intended. But partially because of the reasons Gregor says I am full of shit, I can't understand why your Mastercam is so slow. It makes no sense. That program ran fine on much older hardware. We've all seen that. That's why I'd be interested to see what you are doing, because people have made stuff in Mastercam for decades now without wondering whether their i7 was actually a 286.

  4. #43

    Quote Originally Posted by EmanuelGoldstein View Post

    It is actually pretty difficult to shut all that down for a real test. Most people don't know that, it's not an "aspersion" to doubt what he saw is pertinent to the "how many threads does Mastercam use ?" question. There are compiler tools that will run just the program and report on what it is doing. That's what one would need to do to get a true answer.

    Sorry, Brian T, no aspersions intended. But partially because of the reasons Gregor says I am full of shit, I can't understand why your Mastercam is so slow. It makes no sense. That program ran fine on much older hardware. We've all seen that. That's why I'd be interested to see what you are doing, because people have made stuff in Mastercam for decades now without wondering whether their i7 was actually a 286.
    No problem at all! I've said all along (and I think this thread has proven my point) that I don't actually know what I'm talking about. Furthermore, I actually haven't been able to replicate my original test, which is why I haven't posted a screenshot. It looks like what actually happens is they all spike for a second, then all but 2 drop down. Perhaps I regenerated a rest milling op or something that works with other toolpaths in the background.

    Also, perhaps my computer isn't as slow as I think; however, it seems to me I should still be able to throw some money at it to speed it up.

  5. #44

    Quote Originally Posted by EmanuelGoldstein View Post
    Everyone is entitled to their own opinion. I don't drink the kool-aid, you like it. That's fine.
    I really don't want to get drawn into an internet pissing match, not least because we're going wildly off-topic and arguing about things that nobody else ITT cares about...



    Quote Originally Posted by EmanuelGoldstein View Post
    ...numalink...
    You brought it up for some reason when we were talking about distributed compute. Numalink is what it is - HP need it for their mainframes. So what? It has nothing whatsoever to do with anything that we're discussing...

    Quote Originally Posted by EmanuelGoldstein View Post
    ...APT...
    APT is a different argument. I used it to highlight your propensity for making false equivalence arguments.

    Quote Originally Posted by EmanuelGoldstein View Post
    ...newest latest Apple airweight supercoolifragilistic expialadocious laptop...
    I use OSX for about half of everything that I use a computer for. It's rock solid in my experience.

    Quote Originally Posted by EmanuelGoldstein View Post
    Just out of curiosity, do you ever build software ? Without using the "latest greatest" version of gcc and all that crap ? My experience has been that 20% of the time it is great, the software is well written and thoroughly tested. Maybe 40 or 50% of the time it is okay, you can make it work. And a full 30 or 40% of the time it is absolute shit.
    Actually, yes to the first question. To the second question, kind of - I started out programming when I was a teenager in the 90s, so I have experience with older tools and methods and different platforms, but I don't maintain old code or anything like that - only work with current toolsets.

    We can definitely agree that these days the general quality of commercial software is pretty bad. Wages in that space are often very low, with a lot of inexperienced and poorly educated people writing code and making design decisions, and everyone is moving to a rolling-release model so they don't have to spend money on QA and can just let the early adopters take the hit coughmicrosoftcough

    Quote Originally Posted by EmanuelGoldstein View Post
    What I thought was weird is that Mastercam, which is NOT that computationally intense, would be benefitted by this when, as you say, modern multi-core cpus and that incredibly fast memory system should spit it out wham bam thankyou ma'am in seconds on the local workstation. That was what I thought was weird. Mastercam is not computing weather forecasts
    Yes, it should be faster. Given the raw power of modern CPUs, everything could be much faster. But you'll go a long way these days to find someone who can rework a function in assembly. Optimising takes time; low-level programming takes time. Development is much faster, relatively, because of high-level languages and abstracted APIs built layer upon layer on top of each other, but it's computationally expensive.

    On top of that, it's important to understand how much work a modern cam system is actually doing compared to what they used to do.

    Collision detection and avoidance, continuous tool vector optimisation, dynamic tool load and engagement normalisation, dynamic path filtering, machine dynamics optimisation etc. etc.

    Quote Originally Posted by EmanuelGoldstein View Post
    No aspersions were cast. I do not think his test was accurate though. Most people are not aware that in a multi-tasking os there are a lot of other things going on that have nothing to do with the program being tested. So, as an extreme example, was Windows doing an update while he looked at the cpu-meter ? An anti-virus program running a scan ? Several of those Windows "services" doing their thing ?
    Typically on something like an i7 running Windows, background processes will use negligible CPU time. A virus scan, for example, will always be I/O-bottlenecked and might put 20% load on one core. On such a typical system, if all the cores are pegged it is, 999 times out of 1000, because of what you are doing in the foreground.

    If OP witnessed all eight threads fully loaded while doing something in Mastercam, then unless he was raytracing something or rendering a video for youtube in the background, it was Mastercam.

    Quote Originally Posted by EmanuelGoldstein View Post
    There are compiler tools that will run just the program and report on what it is doing. That's what one would need to do to get a true answer.
    You're talking about a profiler, and generally they're not useful outside of the development environment because they inject hooks into the binary at compile time in order to work. On windows, a typical profiler will report the number of threads, but will not tell you anything about their core affinity, as that is determined dynamically by the scheduler.

    Perfmon.exe is the simplest way to get hard data on this, as it will let you log the CPU time used by a specific process as a percentage of total CPU time.
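    The number perfmon reports can also be sketched from inside a process: effective cores ≈ CPU time divided by wall time over an interval. A minimal stdlib Python illustration of that metric (the spin loop is just a hypothetical stand-in workload, not anything Mastercam actually does; note that CPython's GIL would keep pure-Python threads near 1.0 regardless):

    ```python
    import time

    def effective_cores(workload):
        """Run workload() and return CPU-seconds consumed per wall-clock second.

        ~1.0 means one core busy, ~8.0 means eight cores pegged, ~0.0 means
        the process was mostly waiting (I/O-bound, like a virus scan).
        """
        cpu0, wall0 = time.process_time(), time.perf_counter()
        workload()
        cpu1, wall1 = time.process_time(), time.perf_counter()
        return (cpu1 - cpu0) / (wall1 - wall0)

    def spin(seconds=0.25):
        # Stand-in for a CPU-bound calculation: burn one core for a while.
        end = time.perf_counter() + seconds
        while time.perf_counter() < end:
            pass

    ratio = effective_cores(spin)   # close to 1.0: a single-threaded hot loop
    ```

    An I/O-bound task shows a ratio near zero even though it feels busy; perfmon's per-process "% Processor Time" counter is essentially this same ratio, sampled continuously.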

  6. #45

    Quote Originally Posted by gregormarwick View Post
    We can definitely agree that these days the general quality of commercial software is pretty bad.
    For two people that agree, we sure managed to turn this into an argument

    I do agree with you that hardware is much better. I just don't think that commercial software has improved in the last ten or fifteen years. A bugfix here and a bugfix there, but real improvements? I bet in a blind test people couldn't tell the difference between XYZApp 2006 and XYZApp 2019. Or maybe they would prefer the old one!

  8. #46

    The configuration you have is not exactly wimpy! Simply increasing the number of cores and the clock speed is unlikely to get you even close to an order-of-magnitude jump in performance. I suspect it comes down to how well Mastercam load-balances across multiple cores.
    Most modern CPUs are already 64-bit, but that doesn't buy you anything unless your program is compiled for 64-bit.

    Some serious conversation with Mastercam might help but I wouldn't hold my breath. Good luck.

  9. #47

    A bit late to the party...

    Not too sure about MC, but a lot of cad/cam/cae software is floating-point intensive. Old AMD CPUs were really bad at this because they used to share one FP unit between two integer cores, but I have no idea what Ryzens are doing. It seems like they keep this info somewhat hard for the user to find.

    The big advantages of Xeon are multiple-socket support and ECC RAM support. Not sure if Ryzen supports those...

    Windows Home used to not support multiple sockets, though 10 Home may be different, and I think MS opened up Pro as well; it used to support only two sockets but now may be more. Make sure to have hyperthreading enabled for MC, but check for your other applications - it's very application-specific.

    There seems to be a lot of discussion about core speed versus number of cores, and single-threaded versus multi-threaded. There is still a lot of both threading types in various cam software. You have to decide for yourself on a compromise between 4 fast cores, 6 slightly slower cores, or 8 (or more) cores that are slower still. There are some Xeons with a high core count (48 cores?) that are relatively fast, but the price is high as well.

    For a starting point, it's a good idea to monitor your CPU load for all your common tasks to see which ones are single-threaded and which are multi-threaded. If you go for a 4-core CPU, then your multi-threaded performance will suffer. My ideal setup would be a fast 4-core and then a decent 12-core on a second socket, but we all know that isn't supported. My main workstation is a decently fast single 6-core, but a build I am going to do for contract work will be either two of these or two fast 4-core CPUs; still haven't decided. Currently I see NX crunching hard on some single-threaded processes, but there are many multi-threaded ones too, where I watch all six cores crunch for a while, such as in a simulation with the resolution cranked up a bit.
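    The fast-cores-versus-more-cores compromise can be put in rough numbers. A back-of-envelope sketch (the workload split and clock speeds here are made up, and it assumes the parallel phase scales perfectly across cores, which real cam code won't):

    ```python
    def job_time(serial_s, parallel_s, clock_ghz, n_cores, ref_ghz=4.0):
        """Estimated wall time for a job with a serial phase and a parallel phase.

        serial_s / parallel_s: seconds each phase takes on one reference core
        at ref_ghz. Assumes clock speed scales both phases linearly and the
        parallel phase divides evenly across cores - both optimistic.
        """
        scale = ref_ghz / clock_ghz            # slower clock -> longer times
        return serial_s * scale + parallel_s * scale / n_cores

    # Hypothetical workload: 60 s of serial work, 120 s of parallelisable work.
    fast_four = job_time(60, 120, clock_ghz=4.5, n_cores=4)   # ~80.0 s
    slow_eight = job_time(60, 120, clock_ghz=3.5, n_cores=8)  # ~85.7 s
    ```

    With this particular made-up mix the fast 4-core edges out the slower 8-core; tilt the workload further toward the parallelisable phase and the ranking flips - which is exactly why monitoring your own common tasks first matters.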

    As for SSDs, I would go with a decent M.2 setup. Definitely skip SATA3 - it's old school and slow - and I would skip PCIe card SSDs unless you find one at a decent price point. You could do RAID but I wouldn't bother; your big concern should be the CPU. You probably have enough RAM, although check your usage. 64GB is the new 32, lol. Your GPU performance will mainly matter for redraws, rotations, and perhaps high-end renderings if that's your thing. If you find your graphics really lacking, then perhaps look at a faster card, and if you overwhelm a single card then you might consider running multiple cards with SLI.

    Too bad cam software isn't more like grid computing. On a test rig I had three gtx 1080's crunching grid work units for protein folding and it totally rocked back when those were top end cards. The neighbor kid thought it was a huge waste because I didn't game with it LOL.

  10. #48

    Quote Originally Posted by Qwan View Post
    For a starting point it's a good idea to monitor your cpu load for all your common tasks to see which ones are single threaded and which are multi-threaded.
    Yes, but also pay attention to which tasks you end up waiting for the most in your daily workflow. If you have a lot of multi-threaded tasks that you wait a couple of seconds for and a few single-threaded tasks that you wait minutes for, you're going to save a lot more time overall by streamlining those single-threaded tasks, even if it makes you wait a little longer for the fast ones. Also pay attention to how many threads a multi-threaded task is capable of using. It may be capable of using four threads but no more, in which case a 16-thread CPU is still wasting most of its capacity.
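    That intuition is essentially Amdahl's law: the serial fraction of the work caps the overall speedup no matter how many cores you add. A small sketch, with hypothetical fractions:

    ```python
    def amdahl_speedup(parallel_fraction, n_cores):
        """Overall speedup when only parallel_fraction of the work scales with cores."""
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / n_cores)

    # If only half the workflow is multi-threaded, 16 cores buy less than 2x overall:
    print(round(amdahl_speedup(0.5, 16), 2))   # 1.88
    # A perfectly parallel task scales linearly:
    print(round(amdahl_speedup(1.0, 8), 2))    # 8.0
    ```

    So if the minutes-long waits are single-threaded, shaving them (faster clock, or eliminating them) beats adding cores that only speed up the seconds-long waits.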

  11. #49

    Don't know if you guys noticed this but the eight-cores-busy observation was apparently a fluke, seems that Mcam is keeping two, count 'em two, cores busy.

    I guess the 62 other cores could sort his recipe folder in the background ...

    If there's a top for Windows, it would give at least a little bit of an idea what's going on ... a *little* bit, he said ...

    Quote Originally Posted by top
    Processes: 62 total, 2 running, 60 sleeping, 299 threads 17:18:04
    Load Avg: 0.27, 0.29, 0.30 CPU usage: 1.39% user, 3.72% sys, 94.88% idle

    PID COMMAND %CPU TIME #TH #WQ #POR #MRE RPRVT RSHRD RSIZE VPRVT
    1502 distnoted 0.0 00:00.01 2 1 39 47 396K 240K 1016K 30M
    1501 mdworker 0.0 00:00.17 4 2 55 73 1208K 8604K 5448K 23M
    1498 launchd 0.0 00:00.02 2 0 53 45 376K 416K 792K 38M
    1497 top 6.2 00:17.48 1/1 0 31 29 936K 216K 1644K 17M
    1494 bash 0.0 00:00.02 1 0 20 23 392K 216K 1156K 17M
    1493 login 0.0 00:00.06 2 1 33 57 720K 216K 2028K 30M
    1491 Terminal 0.6 00:04.81 5 1 121 176 5608K 15M 18M 22M
    693 firefox 0.0 08:36.88 42 2 225 938 161M 25M 256M 272M
    624 AppleSpell 0.0 00:00.14 2 1 49 46 872K 8628K 3140K 30M
    617 filecoordina 0.0 00:00.03 2 2 35 40 504K 216K 1752K 22M
    614 Skim 0.0 00:09.80 2 1 113 248 7868K 23M 21M 21M
    203 netbiosd 0.0 00:02.92 3 3 46 57 704K 328K 2248K 41M
    200 mdworker 0.0 00:31.01 4 1 55 100 3780K 8896K 15M 23M
    172 Tunnelblick 0.0 00:10.99 3 1 160 181 8608K 13M 21M 38M
    168 iTunesHelper 0.0 00:00.15 3 1 56 75 1036K 944K 3548K 24M
    167 TISwitcher 0.0 00:00.18 2 1 80 79 1252K 7404K 5780K 32M
    162- Little Snitc 0.0 02:43.92 4 1 128 177 4192K 14M 10M 33M
    161- Little Snitc 0.0 00:01.45 3 1 117 168 2612K 16M 9528K 24M
    156 imagent 0.0 00:00.34 2 1 57 69 1308K 4676K 3652K 30M
    151 warmd_agent 0.0 00:00.02 2 2 35 50 476K 220K 1736K 23M
    147 com.apple.do 0.0 00:00.37 2 1 95 111 2856K 6824K 11M 33M
    145 fontd 0.0 00:01.31 2 1 74 101 2204K 4884K 4132K 22M
    This is with nothing happening ... you can run it while you are doing something and get a small feel for what is taking up processor time. It's a very basic tool but better than nothing.

    edit: maybe start with this

    Using the Get-Process Cmdlet | Microsoft Docs

  12. #50

    Quote Originally Posted by EmanuelGoldstein View Post
    Don't know if you guys noticed this but the eight-cores-busy observation was apparently a fluke, seems that Mcam is keeping two, count 'em two, cores busy.

    I guess the 62 other cores could sort his recipe folder in the background ...
    Well that IS the issue with WinWOES. Outrageous overheads totally unrelated to a "Line of Business" or "Mission Critical" tasking that a person is happy to dedicate good hardware to.

    And then.. go handle email, You Tube, browse food sites, book hotels, pay bills, etc. on some other box ... or laptop.

    Even a hard-core BSD'er is well aware that WinWOES equivalent to a "kernel" is actually right competent, and even robust enough to run 24 X 7 for more than a year between fan cleaning, HDD, or MB upgrades.

    I had turned all that nonsense over to a partner by around 2004, our last joint task a stripped WinServer 2003 image run in QEMU on an OpenBSD host.

    Sean, downunder, had kick-started the stripdown biz with "98-Lite" "Win-NT Lite", and Win-2K Lite. We started there, then went well beyond that.

    SOMEBODY, or SEVERAL somebodys, who have kept up over the nearly 20 years since, will have even better tools by this late date to craft uber-streamlined Win "Mission Critical" line-of-business "bare metal" or virtualizer "images", either one, or BOTH.

    The stability, security, and performance gains as can be had are borderline MAGICAL.

    IF THEN ALSO you throw good value-for-money (AMD "at the moment") hardware at it as well?

    Problem solved. Bigtime. Longtime.

    "Job One" is to find out who those wizards are, present day, and see what can be done with a stripped for bespoke purpose Win.

    NB: They do not NEED much in the way of UPDATES. My modified Quickbooks accounting QEMU image still works. NT4, Service Pack 3. Easy peasy any modern CPU WinTel, Power, AMD, VIA. It ran "OK" on a 50 MHz Cyrix, then a 900 MHz VIA C3, then G4 Mac, then... anything, really.

    So it isn't about the cost of the hardware.

    That can pay back first three months or even sooner, you are being PAID for product, not hobby.

    It is about the TIME saved, going forward, off leaner, faster, more stable, and more easily kept-secure use of no more SOFTWARE than what your Application actually NEEDS from that massive all-things-to-all-possible-users warehouse that is Windows - part Walmart, part the boneyard of obsoletes at Luke AFB.

    Lightest touch of a stripdown removed seventy THOUSAND files we didn't ever use - 20+ years ago. Active processes? We dropped over a hundred. Minimum.

    And then we got SERIOUS about it and most of the bugs and their security holes went away as well.

    No need to DIY.

    As Howard Hughes said to Noah Dietrich:

    "Find the experts!"

    I am not they. MY life has always been too DAMNED short to truck with WinWOES at all, MSDOS 1.X onward. MS BASIC in ROM, the LAST MS product I used, rather than simply evaluated or supported for others.

    But exist the experts surely do.

    Most likely not machinists, but there you have it.

    Have to look in the right places, yah?


  13. #51

    Quote Originally Posted by EmanuelGoldstein View Post
    Don't know if you guys noticed this but the eight-cores-busy observation was apparently a fluke, seems that Mcam is keeping two, count 'em two, cores busy.
    Like I said back on page 1, Mastercam likes clock speed not multiple cores but it turned into a bigger dick contest between you guys.

  15. #52

    Quote Originally Posted by Mtndew View Post
    Like I said back on page 1, Mastercam likes clock speed not multiple cores but it turned into a bigger dick contest between you guys.
    Noooo. SMALLER "dick" if you can give it what the critical app NEEDS with less jumping through its own ass, even to Ring 0 and lesser and/or "thrashing" its I/O.

    Time was, the fastest "HDD" we could cram into a WinBox was a pair of TCNS fiber-optic NICs... to an industrial-strength duplexed-controller CDC RAID array on the server at the other end of the glass! Thomas-Conrad's version of the ARCnet protocol, separated, was scary efficient compared to Metcalf's folly, "regardless". "HPPI" was another one.

    There's ALWAYS room for improvement, and more often with brains than money.


  17. #53

    Quote Originally Posted by mhajicek View Post
    Yes, but also pay attention to what tasks you end up waiting for the most in your daily workflow...
    That is implied in the discussion. Nobody should throw resources at an issue that does not exist.

