Overclocking my Maho 600E2
  1. #1 (FINLAND, joined Mar 2014, 35 posts)

    Background and upgrade speculation at maho24.at

    Video description:
    Tested cycles:
    - Horizontal with mostly G3 movements
    - Spiral with concentric circles (both G3's and G1's), Tolerance 0.01mm
    - Ramp, Tolerance 0.01mm
    - Bore

    Original CPU: 386DX-16 + 387DX FPU, running at 16MHz / 32MHz oscillator
    Original problem:
    - BTR buffer runs dry during BTR/DNC transfer when executing many small G1 moves, because the effective transfer rate drops
    - Movement stops to wait for more data
    - "Movement jerking" when the feed rate is too high.

    Tested hardware: TI486DLC-33 + ULSI Math Co FPU, running at 20MHz / 40MHz oscillator. The oscillator is installed in a DIL14 socket for easy replacement.

    Findings:
    - CPU failure reported at startup diagnostics, which can seemingly be ignored but needs manual acknowledgement
    - No more BTR buffer underrun; data transfer keeps ahead at all times
    - Movements slow down during small G1 moves, but no serious jerking!
    - The "stopping movement" now happens only when the concentric spiral enters the next Z level with a smooth transition
    - The speed difference between small G1's and G3's is obvious.
    - Tested with feed override; no effect on data transfer even at 1120mm/min!

    To be done: replace the A82380-16 DMA controller with an A82380-25 and do further testing with a 50MHz oscillator
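    The underrun mechanics can be illustrated with a toy producer/consumer model. The per-tick rates and the prefill below are made-up illustration numbers, not measurements from the machine:

```python
# Toy model of BTR/DNC data starvation: the control executes short G1 blocks
# faster than the serial link can refill the buffer, so execution eventually
# catches up with the transfer and the machine has to stop and wait.
def first_underrun(transfer_per_tick, execute_per_tick, prefill, total_lines):
    """Return the executed line count at the first underrun, or None."""
    transferred, executed = prefill, 0
    while executed < total_lines:
        transferred = min(total_lines, transferred + transfer_per_tick)
        executed += execute_per_tick
        if executed > transferred:
            return executed  # starved: movement halts to wait for data
    return None

# Link slower than execution: the 1000-line head start eventually drains.
print(first_underrun(7, 8, 1000, 13000))   # 8008
# Link faster than execution: the buffer never runs dry.
print(first_underrun(9, 8, 1000, 13000))   # None
```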



  2. #2 (People's Republic, joined Sep 2002, 5,970 posts)

    Fun

    Still 30-year-old hardware.

    I wonder if the delays could be minimized by working on the transfer end.

  3. #3 (FINLAND, joined Mar 2014, 35 posts)

    You mean the delays in movement control / bus?

  4. #4 (Ohio-USA, joined Dec 2011, 36 posts)

    Hope to see a lot more of your endeavor.
    Maybe more in depth for those who would want to do as you have.
    Anyway, thanks for showing this.
    Bob

  5. #5 (FINLAND, joined Mar 2014, 35 posts)

    Thanks for the interest!

    This is a Maho 600E2 with a 432/10 control that has a 386 CPU running at 16MHz. There are other versions, too, that use 8088 and 286 CPUs. The latter could be upgraded with a Harris-made 286 rated for 25MHz nominal, and the 8088 could be replaced with a NEC V20, V20HL or V30, which are all faster straight out of the box AND can be clocked up to 16MHz (I'm not familiar with all of them). I don't know, however, whether any of these uses the same timing and data access principles as the 386 unit. And I advise proceeding with caution, as these modules don't exactly grow on trees.

    ---

    At first I replaced the original 386 and the (optional) 387 FPU with a TI486DLC-33 and a ULSI FPU, both of which are more efficient than their original counterparts. As a matter of fact, I noticed the transfer speed getting a bit better / the BTR buffer underrun happening later than normal, so I decided to look into whether it is possible to overclock the thing.

    The final decision came after I spent several days inspecting the CPU module's circuit and block diagram, which has all of the signal paths marked. I had noticed earlier that the memory used is static RAM (no clock involved in memory timing), and later, from the documentation, that memory operations, CPU ready logic and general data timing are based on the CPU clock and do not use hardware timing / wait states. There is only one chip (a PAL20L8) that tells the DMA controller whether the slow EPROMs are being used, and two wait-state signals select which of the programmed wait-state registers the DMA controller should use. The DMA controller is an A82380, btw, and I don't have a clue which part of the OS programs it, or whether it's the boot PROM.

    The control bus has its own oscillator running at 24MHz, which a counter circuit divides down to 1.5 and 3.0MHz for the bus. Hence control bus timing is independent of the CPU / memory area, though of course the DMA has wait states for control bus data, as it's so darn slow.

    The control bus is the real bottleneck, as can be seen in the video, but its impact is reduced now that the CPU and memory stay ahead at all times and have "tons" more computing power. Now I'm wondering whether it would be possible to raise the bus frequency without signal processing errors and thus double the speeds: 1.5->3MHz and 3->6MHz. Probably wouldn't happen without hitches and hiccups, though.
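    The divider arithmetic behind those numbers is plain binary-counter stuff (my reading of the description above, assuming power-of-two taps on the counter):

```python
# The 24MHz control-bus oscillator is divided down by a counter; /16 and /8
# give the two current bus clocks. Tapping one stage earlier (halving the
# divider) would double each rate, as speculated above.
OSC_HZ = 24_000_000

def tap_mhz(divider):
    """Bus clock in MHz for a given counter division factor."""
    return OSC_HZ / divider / 1e6

print(tap_mhz(16), tap_mhz(8))  # 1.5 3.0  (current bus clocks)
print(tap_mhz(8), tap_mhz(4))   # 3.0 6.0  (the hoped-for doubled clocks)
```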

    ---

    What I did was simply remove the 32MHz crystal oscillator (care must be taken, as this is a multi-layer PCB), solder in a DIL14/4 socket and put in a 40MHz crystal oscillator. It boots up; the startup configuration check shows a failed CPU that has to be manually acknowledged by pressing F3 (soft key, MANUAL reads on the screen), but after that everything seems to work normally.

    See the pic of the PCB layout below. There are two crystal oscillators in the upper area. The one to the right of the A82380 chip provides the timing for data processing (CPU, FPU, DMA, ready logic), and the other drives the control bus.

    386p.jpg

  6. #6 (Massachusetts, joined May 2001, 1,281 posts)

    I really liked the 432 control when I had one. For a control built in 1990 (mine), it was super straightforward to use and great for prototyping. I like it far better than the 2011 Fanuc I'm currently running.

  7. #7 (FINLAND, joined Mar 2014, 35 posts)

    I have a legacy 15TF Fanuc in my Takisawa lathe, and it's quite horrible in comparison to the 432. For example, tool length data isn't updated at the tool change but only after the first move command. Many "live" / environmental things have to be checked from cryptic machine parameters, etc. In the 432 you have everything essential right there on the screen: active G and M codes, feed and speed overrides, etc. The Fanuc also has FAPT programming, which can't be described as intuitive.

  8. #8 (FINLAND, joined Mar 2014, 35 posts)

    Status report:
    I received the A82380-25's I ordered. So...

    I made three tests:

    1: Install the new DMA controller and replace the 40MHz oscillator with a 50MHz one, hence raising the processor speed from 20 to 25MHz.

    The machine booted, but when I was transmitting the parameters it gave me a memory parity error, and when I tried again it gave an INT 13 error. So 25MHz is too much. I asked my brainy friend whether changing to faster memory chips would help, so this needs further investigation.

    I ran two test runs with some random program I had in the "contour" folder. I have no idea what it is, but it has some tiny G1's, and with 140% feed override it runs at 980mm/min. Not sure if that's true, as I don't know which model was used to make the program, so I can't check the calculated machining time in Fusion360.

    Test conditions:
    MC93 / BTR memory 80kb
    BTR ON
    Start time taken at the start of the file transfer
    Cycle start when 1000 lines are transferred
    End time at M5
    "Jerking index" determined by visually observing the right-side splash guard "wobbling"

    2: Test run at 20MHz / TI486DLC
    Time: 11min 45sec
    Data transfer ahead at all times
    Low to moderate "jerking index"

    3: Test run at 16MHz / TI486DLC
    Time: 13min 00sec
    Buffer runs out after approx. 950 lines of code due to data starvation, and after that every 900-1000 executed lines.
    Moderate "jerking index" / not as bad as I thought.
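    For the record, the saving between the 20MHz and 16MHz runs works out as:

```python
# Cycle times from the two test runs above, converted to seconds.
t_16mhz = 13 * 60       # 13min 00s at 16MHz
t_20mhz = 11 * 60 + 45  # 11min 45s at 20MHz
saved = t_16mhz - t_20mhz
print(f"{saved}s saved, {100 * saved / t_16mhz:.1f}% shorter cycle")
# -> 75s saved, 9.6% shorter cycle
```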

    - I will make some random design in Fusion and create various tool paths with different feed rates, and we can use it as a test standard when determining the machine capabilities and actual speeds with different CPUs and possible upgrades.

    - I will also try to make an easy and cheap to reproduce "jerk o' meter" so we can get some "index numbers" using the standard test file.
    - I will then make a test run with the stock 386DX-16 and 387DX-16 CPUs as well.
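    One cheap route to a "jerk o' meter" would be to sample any accelerometer and differentiate, since jerk is just da/dt. A sketch with made-up sample traces (the sensor choice, sampling rate and numbers below are my assumptions, not the actual meter design):

```python
# Jerk is the time derivative of acceleration; a finite difference over
# sampled accelerometer data gives a usable relative "jerking index".
def jerk_index(accel_samples, dt):
    """RMS of the discrete jerk (da/dt) over an acceleration trace."""
    jerks = [(a1 - a0) / dt
             for a0, a1 in zip(accel_samples, accel_samples[1:])]
    return (sum(j * j for j in jerks) / len(jerks)) ** 0.5

smooth = [0.0, 0.1, 0.2, 0.3, 0.4]   # gentle ramp, m/s^2 (hypothetical)
jerky = [0.0, 0.4, -0.3, 0.5, -0.2]  # oscillating trace, m/s^2 (hypothetical)
DT = 0.01                            # 100 Hz sampling (hypothetical)
print(jerk_index(smooth, DT) < jerk_index(jerky, DT))  # True
```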

    There is a more active thread at Maho24.at - 386 upgrade. There is also a speculation thread about upgrading the older units with 8088 CPUs.

    But I'll write key updates here also.

  9. #9 (People's Republic, joined Sep 2002, 5,970 posts)

    What I was getting at [but I am no computer nerd] is that processing speed is not, I think, the main impediment. Data speed into the machine and the machine's bus speed would seem like more of an impediment.

  10. #10 (FINLAND, joined Mar 2014, 35 posts)

    Servo control data speed is related to the general data bus speed, and hence to computing speed and clock.

    There is a "machine bus" with 3MHz and 1.5MHz clocks from the CPU card, coming from an oscillator used "only" for this purpose.

    Turns out that the 3M clock is used by the LS drive boards and the 1.5M by the RM boards, which I have. We have studied the signal paths, and I am now planning to use the same 3MHz speed for the RM boards. The components are similar to the LS boards, so there should be no ill effects.
    What surprised me is that there is no clock signal involved in forming the analog signal (-10...0...+10V) for the servo controllers.

    Now that there is way more computing power available and the servo control speed is theoretically increased, it would be beneficial to increase the rate of the incoming feedback signals (linear scales and servo feedback).

    These 1.5M and 3M frequencies are a legacy of the 432 control with the 8088 CPU.

  11. #11 (People's Republic, joined Sep 2002, 5,970 posts)

    Again, I could be wrong, but I think processing speed is not the issue, but data throughput. I am thinking of how they moved RAM onto the CPU die, partially to remove bus roadblocks, but also because the time spent pushing data across 6 inches of wire starts adding up when you have to wait for it to arrive before you can process it.

    Hook a giant internet pipe to a 486 computer and I don't know that you could stream Netflix, because of internal bus speed issues. You never noticed it on dial-up...

    One of the best ways to speed up your old CNC is to optimize your Fusion post [if the control can make a circle, don't let Fusion make it 6000 line segments], and don't do silly things like helical ramps that require 200 lines of code.
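    The "6000 line segments" point is easy to quantify with the sagitta formula (standard chord geometry, nothing Fusion-specific; the radius and tolerance below are example values):

```python
import math

def segments_for_circle(radius_mm, tolerance_mm):
    """Chords needed so the max chord-to-arc deviation stays within tolerance."""
    # sagitta s = r * (1 - cos(theta/2)) <= tolerance, solved for theta
    half_angle = math.acos(1 - tolerance_mm / radius_mm)
    return math.ceil(math.pi / half_angle)

# A full 20mm-radius circle at 0.01mm tolerance takes ~100 short G1 blocks,
# versus a single G2/G3 block if the post emits real arcs.
print(segments_for_circle(20, 0.01))  # 100
```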

  12. #12 (FINLAND, joined Mar 2014, 35 posts)

    Well, the fact is that after increasing the computing power, three things happened:
    - the BTR buffer doesn't run out
    - "jerking" decreased a bit
    - the cycle time for the same program dropped from 13min to 11:45, mostly because there are no more halts due to data starvation.


    The data bus is slow, but an increase in the CPU clock rate speeds that up too, as the CPU / DMA data bus directly controls the D/A converter that drives the servos - or not directly, but the speed is relative to the CPU clock, since the ready and wait logic use the same clock source, for example.

    The feedback / RM drive modules' input scan speed is only 1.5MHz, but the next step is to raise it to 3MHz, the same as the LS drive modules use. Those have the same 8253 chips and data channel multiplexing chips as the RMs. And the 8253s are used in the 8088 CPU modules at 6 and 8MHz.

    This can have adverse effects, but soon we will have the info.
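    For context on the 8253 point above: the 8253/8254 is a programmable 16-bit divider, so its output rate is input clock / programmed count (standard datasheet behaviour; the count value below is a made-up example, not read from the boards):

```python
# 8253/8254 timers divide their input clock by a 16-bit programmed count,
# so doubling the input clock doubles every derived rate unless the counts
# are reprogrammed to compensate.
def output_hz(clock_hz, count):
    assert 1 <= count <= 0x10000  # a programmed count of 0 acts as 65536
    return clock_hz / count

COUNT = 100  # hypothetical programmed count
print(output_hz(1_500_000, COUNT))  # 15000.0 Hz at the RM boards' 1.5MHz clock
print(output_hz(3_000_000, COUNT))  # 30000.0 Hz if the RMs get the LS 3MHz clock
```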

  13. #13 (FINLAND, joined Mar 2014, 35 posts)

    And to be noted, the test program is a 13k-line 3-axis contouring program full of ridiculously short G1's.

    The Fusion post I use allows G2/G3 helical moves, as well as G2/G3 circle moves, and I use "concentric circles" for spiral moves, but that isn't enough: Fusion CAM makes weird minuscule Z movements, and ramping doesn't allow concentric mode at all -> G1's. If only it could take advantage of Maho's geometric programming, but I'm not up to the task... I only do hardware.

