Saturday, May 18, 2019

Learning LTspice on a Mac

The best way to learn an electrical circuit is to simulate it. Up until 2010, PSpice (Personal (computer) Simulation Program with Integrated Circuit Emphasis) was the tool of choice. But then PSpice was acquired by OrCAD, which made PSpice part of a huge software package and made the free version available only on a limited-time basis. OrCAD was then acquired by Cadence. Schools started to look at other vendors such as National Instruments (NI). If a school already had a site license for NI's LabVIEW, then it was only a matter of paying a few hundred extra dollars a year to acquire the license for NI's MultiSim, which also provides a one-year evaluation version for all students. This may explain those technology fees that students see added to their tuition bills. Those outside the classroom, or students whose schools don't have software licensing, should consider LTspice, which is truly free and is supported by Linear Technology, recently bought by Analog Devices. It has all of the features of the original PSpice software, and Analog Devices keeps adding models for its own new chips (op-amps, voltage regulators, etc.). Special thanks to my former college professor for this background on PSpice.

LTspice has a version for the Mac, available from Analog Devices' website. In this example, an electrical circuit with a current source was used as a model to explain the thermal dynamics of heat sinking for a metal-oxide-semiconductor field-effect transistor (MOSFET), which can be used for power switching applications. A MOSFET generates heat during its switching and conduction stages, and because MOSFETs have temperature limitations, this heat must be dissipated. A MOSFET is surrounded by different types of materials that serve to carry this heat away. As the heat is dissipated, a particular temperature variance exists across each material. Therefore, these variances must be considered in the selection of materials needed to dissipate a certain amount of heat. This variance per watt of dissipated power is known as thermal resistance, specified in ℃/W. An element with a thermal resistance of 1 ℃/W develops a temperature variance of 1 degree across it when dissipating 1 Watt of power. If a MOSFET has a thermal operating limit of 100 ℃ and the ambient temperature is 40 ℃, what is the heat sink's maximum thermal resistance to prevent overheating of the MOSFET, considering that the junction thermal resistance is 1.03 ℃/W and the case thermal resistance is 1.09 ℃/W?

The temperature variance across a material can be thought of as the voltage drop across a resistor. Temperature can be thought of as voltage, temperature variance per watt as resistance, power dissipation as a current source, and ambient temperature as a battery source. With these analogies, an electrical circuit can be created to model the temperature variances across the different materials. It is important to realize that this electrical circuit is only a model and not an actual electrical circuit. When the circuit is run through the simulator, the operating temperature of the MOSFET can be determined. Also, because the model uses a current source, which establishes a fixed current throughout the circuit regardless of any voltage elements, the circuit analysis must be based on current-source behavior.

A MOSFET is typically surrounded by three layers of material that can dissipate heat: the junction, the case, and the heat sink. In this example, the junction has a thermal resistance of 1.03 ℃/W and the case 1.09 ℃/W. Since the MOSFET cannot exceed 100 ℃, the circuit can be set to run at maximum conditions: the combined thermal resistances of all heat-transferring materials must produce a total temperature variance of no more than 60 ℃. With the circuit drawn in LTspice (see below), Ohm's Law was used to calculate a maximum thermal resistance of 15.982 ℃/W for the heat sink. This value was plugged into the circuit and simulated. The voltage at node Tj, which represents the temperature at the junction, was found to be 100 ℃, confirming that the MOSFET just reaches, but does not exceed, its limit. Overheating can only happen if the MOSFET's total power losses go up (the current source in the model) or if the thermal resistances of the materials go up (the resistors in the model). This is why it is important for materials to have high thermal conductivity, which corresponds to low thermal resistance.
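
As a quick cross-check of the arithmetic above, here is a minimal Java sketch of the same thermal Ohm's Law relationship. The MOSFET's total power loss is not stated in the text (it is the value of the current source in the schematic), so the value used below is only a hypothetical placeholder.

// Thermal "Ohm's Law" sketch of the heat-sink model above.
// The power loss P is hypothetical; in the article it comes from the schematic's current source.
public class HeatSinkModel {

    // Maximum allowable heat-sink thermal resistance (deg C per watt)
    static double maxHeatSinkResistance(double tjMax, double tAmbient,
                                        double rJC, double rCS, double p) {
        // Total allowable resistance = allowable temperature rise / power,
        // then subtract the junction and case contributions.
        return (tjMax - tAmbient) / p - rJC - rCS;
    }

    // Junction temperature for a given heat-sink resistance (the "Tj node" in the model)
    static double junctionTemp(double tAmbient, double rJC, double rCS,
                               double rSA, double p) {
        return tAmbient + p * (rJC + rCS + rSA);
    }

    public static void main(String[] args) {
        double p = 3.3;   // hypothetical MOSFET power loss in watts (not given in the article)
        double rSA = maxHeatSinkResistance(100, 40, 1.03, 1.09, p);
        System.out.printf("Max heat-sink resistance: %.2f C/W%n", rSA);
        System.out.printf("Tj at that limit: %.1f C%n",
                junctionTemp(40, 1.03, 1.09, rSA, p));   // should print 100.0 C
    }
}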

Article by DScrobeIII



Saturday, October 27, 2018

Learning Java on a Mac

Farrell (2016) states that it is a tradition for programmers to write their first program in a new language so that it produces the expression “Hello, world!” Carrying out this tradition for Java will be the second time I have had the pleasure of doing so in the world of programming. The first time was in 2003 at Penn State, Berks Campus, where I created machine code for Motorola's 68000 microprocessor in the Associate Electrical Engineering Technology program. In this discussion, I will explore some basic knowledge of how to get set up using Java on a Mac and will demonstrate my first Java program, which prompts the user for a radius and length and computes a cylinder's area and volume.

The process of writing a computer program begins with the use of a simple text application such as TextEdit to formulate statements, called source code, using the rules of the Java language. The text file is saved as a .java file. The whole point of creating source code is to prepare for a smooth interface between the human and computer languages. The Java compiler, provided by a Java Development Kit (JDK), converts the source code into a binary format called bytecode and creates a new .class file. When the program is run, the Java Virtual Machine (JVM) performs a security analysis of the .class file and translates it into machine code, the set of instructions that the computer's processor and operating system execute to produce the desired result.

The three types of errors that may be encountered in program construction are compile-time, run-time, and logic errors. A compile-time error is the result of improper use of the Java language's syntax rules; because of this violation, the source code cannot be converted into bytecode. A run-time error occurs when the program compiles successfully but fails during execution. A logic error, the most difficult to resolve, occurs when the program compiles and runs but the execution produces undesired results. For example, consider a programmer who wants to add two numbers together. The programmer would realize a logic error has occurred upon discovering that the program was actually multiplying the two numbers instead of adding them.
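
As a hypothetical illustration of this (the class and variable names are my own), the following program compiles and runs cleanly yet still contains a logic error:

public class AddTwoNumbers {
    public static void main(String[] args) {
        int first = 4;
        int second = 5;
        int sum = first * second;            // logic error: '*' was typed where '+' was intended
        System.out.println("Sum = " + sum);  // compiles and runs, but prints 20 instead of 9
    }
}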

I use a MacBook (13-inch, Mid 2010, Model Identifier 7,1) with a Core 2 Duo 64-bit processor running the 64-bit macOS High Sierra Version 10.13.6. To write my source code for calculating the area and volume of a cylinder, I used TextEdit, which comes with the operating system (OS). When finished writing, I saved the file as a .java file, which can be done by viewing the file extension in the File Save window and replacing .txt with .java. A JDK is required to compile and execute the program; since I have Java Version 8 installed, I downloaded JDK Version 8 from Java's website. I used Terminal, which also comes with the OS, to compile the source code and have the JVM run the program. The command to compile is javac followed by a space and the complete file name of the source code. If the code compiles without any errors, a new file is created with the same name but a .class extension. The command to run is java followed by a space and just the file name without an extension; the JVM then loads and runs the bytecode in the .class file.
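
Below is a minimal sketch of the kind of program described above, not my exact listing. Saved as Cylinder.java (in Java, the public class name must match the file name), it would be compiled with "javac Cylinder.java" and run with "java Cylinder".

import java.util.Scanner;

// Prompt for a cylinder's radius and length, then compute its surface area and volume.
public class Cylinder {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        System.out.print("Enter the radius: ");
        double radius = input.nextDouble();
        System.out.print("Enter the length: ");
        double length = input.nextDouble();

        double area = 2 * Math.PI * radius * (radius + length); // total surface area
        double volume = Math.PI * radius * radius * length;     // volume

        System.out.println("Surface area = " + area);
        System.out.println("Volume = " + volume);
        input.close();
    }
}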

Using TextEdit and Terminal is a very simple, bare-bones way to write programs, but it gives you an appreciation for the basic requirements and a greater awareness of the process steps. Some other commands in Terminal are date (display the time and date), ls (list the directory contents), pwd (show the current directory), and cd (change directory). It is important to note that a PC uses a backslash ( \ ) for file paths and a Mac uses a forward slash ( / ). To improve and speed up the process of writing source code, I downloaded a 32-bit application called TextWrangler. This application color-codes the different elements of the source code, but you still have to use Terminal to compile and check for syntax errors. The ultimate experience is to download an Integrated Development Environment (IDE). An IDE checks for syntax errors as you type and compiles the code behind the scenes when you choose to run the program. I opted for NetBeans.

The purpose of this post is just to show how to get started with running a Java program on a Mac. All that is needed next is a good textbook on the topic in order to learn the many tools that are available.

References:

Farrell, J. (2016). Java programming (8th ed.). Boston, MA: Cengage Learning.

The good and the bad of Java programming. (2018, August 9). Altexsoft. Retrieved from https://www.altexsoft.com/blog/engineering/pros-and-cons-of-java-programming/





Saturday, September 15, 2018

Power Line Carrier

With the use of modern communication-assisted relaying on transmission lines, understanding how power line carrier protection schemes operate and investigating transmission line outages seems to be a lost art. Many work groups, each with a specific skill set, play a role in how transmission equipment is handled. Management of transmission equipment breaks down into five areas of major focus: design, project management, operation, protection, and maintenance. Crews tend to be organized according to the type of equipment: line, substation, communication, and relaying. Utilities that own bulk electric system (BES) equipment are mandated by the North American Electric Reliability Corporation (NERC), the electric reliability regulator, to identify and correct the causes of misoperations of protection systems per standard PRC-004. This requires a great deal of collaboration among the different areas of management and crews. This article walks through the troubleshooting of a misoperation on a 115 kV two-terminal line that uses a directional comparison blocking (DCB) scheme, to give a greater understanding of how power line carrier (PLC) operates, how to troubleshoot it, and which work groups are affected.

The word carrier implies that the conductor of a transmission line is being used to transmit a signal at a frequency other than 60 Hz to convey certain information about the current state of the electric system. In the case of DCB, only two possible pieces of information can be conveyed: one terminal lets the other know whether fault current is flowing away from the line or into the line, and this is accomplished by the presence or absence of a signal. A terminal that receives a signal knows that the other terminal sees fault current flowing away from the line, and a terminal that does not receive a signal knows that the other terminal sees fault current flowing toward the line. This is where the words "directional comparison" in DCB come from: a terminal gets to know in which of two directions fault current is flowing at the other terminal. The reason a terminal needs this information is that it is difficult for the terminal's line relaying to determine fault location near the other terminal. A fault at the far end of the line, just outside the remote substation, is only a short electrical distance from a fault just beyond that substation on adjacent equipment. Line relaying is very good at differentiating locations that are close to it, but not so good at locations that are many miles out. Therefore, the purpose of DCB is to prevent over-tripping of additional lines. If a terminal sees fault current flowing onto its line and receives a carrier signal, it is essentially being told by the other terminal, "Hey buddy, you just sit tight, don't do anything, I see the fault current as well, and I've got things under control on my end." As a result, the terminal receiving the signal blocks the tripping of its line breaker. This is where the word "blocking" in DCB comes from.
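
The blocking decision can be boiled down to a few lines. The sketch below is a simplified illustration of the DCB logic described above, not any vendor's actual relay logic; real schemes also use a short coordination timer that is omitted here.

// Simplified DCB decision from one terminal's point of view.
public class DcbTerminal {

    // Does this terminal see fault current flowing into the line (forward direction)?
    private final boolean seesForwardFault;
    // Is a carrier (blocking) signal being received from the remote terminal?
    private final boolean carrierReceived;

    DcbTerminal(boolean seesForwardFault, boolean carrierReceived) {
        this.seesForwardFault = seesForwardFault;
        this.carrierReceived = carrierReceived;
    }

    // Trip only for a forward fault with no blocking carrier from the far end.
    // (Real schemes wait out a short coordination delay so the carrier has time to arrive.)
    boolean shouldTrip() {
        return seesForwardFault && !carrierReceived;
    }

    public static void main(String[] args) {
        // Fault beyond the remote terminal: local sees forward current, remote sends carrier.
        System.out.println(new DcbTerminal(true, true).shouldTrip());   // false -> tripping blocked
        // Fault on the protected line: forward current seen, no blocking carrier sent.
        System.out.println(new DcbTerminal(true, false).shouldTrip());  // true  -> trip
    }
}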

This year I was asked to investigate a 115 kV line that over-tripped twice for a fault that took place two lines out. The carrier signal operates at 96 kHz. Sequence-of-events (SOE) information gathered shortly after these incidents seemed to show that Station A tripped because it did not receive a carrier signal from Station B. SOE data can come from digital fault recorders (DFR), microprocessor relaying, and signal processing equipment (the carrier set). Relay and electronic crews gather this data and protection engineers analyze it. This was the first time I had ever gotten involved in an investigation of an over-trip due to carrier issues, so I had to consult equipment vendors, design engineers, protection engineers, and electronic technicians.

The first step in troubleshooting an over-trip due to a carrier issue is to ensure that a terminal sends a signal when it is supposed to. A line relay that sees current flow toward the line and away from the substation is considered forward direction. A line relay that sees current flow away from the line and towards the substation is considered reverse direction. Therefore, a test set can be used in the line relaying circuitry to simulate reverse direction to prove that carrier signal is sent. A relay crew and a field engineer would typically perform this work utilizing a test plan that is created by a protection engineer.

The second step in troubleshooting is to perform manual check-back testing of the signal. This is done by having an electronic technician at each terminal. Signals are sent and received to verify that a terminal receives a signal when the other terminal sends it. Signal power levels are measured to ensure that the carrier set will operate at the signal power received and to determine if there is any considerable attenuation of the signal in the various parts of the circuitry that the signal travels through. This is where I come in.

The third and final step in troubleshooting is to check for timing and discontinuities of the signal and verify the functionality of signal processing and relaying equipment by performing what is called end-to-end testing. As in the first step, a relay crew and a field engineer would typically perform this work utilizing a test plan that is created by a protection engineer.

It is important to understand how a signal is sent from one terminal to the other. There is a lot of equipment to consider. You basically need a way to generate a signal strong enough to overcome any losses it will suffer during transmission (transmitter), match the different characteristic impedances between coaxial cable circuits and transmission lines (impedance matching transformer), select for the particular signal frequency (LC series pass filter), couple it to the line (coupling capacitor potential device), ensure it travels only on the line (line or wave trap), and then couple it back to the carrier set at the other terminal. Signal power can be severely attenuated at certain parts of this path, and if the level is too low, the carrier set may not actuate during signal transmission. The picture provided shows an example of typical attenuation on a transmission line. You can see that the transmitter output of 10 Watts has already been cut by more than half when leaving the coaxial circuit!
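
Since carrier levels are usually discussed in dBm, here is a small sketch (my own helper methods, not part of any test set software) for converting between watts and dBm; it also shows that a 3 dB loss cuts the 10 Watt transmitter output roughly in half.

// Helpers for working with carrier signal power levels.
public class CarrierLevels {

    static double wattsToDbm(double watts) {
        return 10.0 * Math.log10(watts * 1000.0); // dBm is referenced to 1 milliwatt
    }

    static double dbmToWatts(double dbm) {
        return Math.pow(10.0, dbm / 10.0) / 1000.0;
    }

    public static void main(String[] args) {
        double txDbm = wattsToDbm(10.0);   // a 10 W transmitter output is +40 dBm
        System.out.printf("Transmitter output: %.1f dBm%n", txDbm);
        // A 3 dB loss cuts the power roughly in half (about 5 W remaining):
        System.out.printf("After 3 dB of coax loss: %.2f W%n", dbmToWatts(txDbm - 3.0));
    }
}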

In this example you can see that the power level of the carrier signal where it enters the RFL 6785P carrier set was +16 dBm, measured with the PowerComm PCA-4125 Power Communications Analyzer. This was -15 dB compared to the carrier set's nominal level, and the carrier set will respond within a 12 to 15 dB margin, so the -15 dB reading was right at the edge of this margin. Two years ago, the receive signal strength was +28 dBm, so something changed on the transmission line or carrier equipment to cause additional loss of carrier signal power. After extensive testing, this factor could not be identified. Therefore, the new power level was accepted as the norm and the carrier set was recalibrated to treat this level as nominal. To explain what happened during the system event, the vendor stated that the carrier set actuates within a certain dB margin and that fault current may compromise the carrier signal. EPRI states that "during the occurrence of a fault, there may be additional noise generated by the impulsive voltages and currents associated with the fault" (2017). Therefore, it is very likely that Station A received the carrier signal when it was supposed to, but the signal power level and signal-to-noise ratio were too low until the fault current was interrupted by the tripping of the breaker. This resulted in an over-trip and misoperation, because the breaker operated when it was not supposed to, due to the carrier set being late in actuating the blocking function of the relays. Now, with the carrier set calibrated to the new signal power level, the carrier set and relaying should respond appropriately; this was confirmed by performing end-to-end testing at a later time. A project was created to install a carrier signal meter to alert transmission operators when signal power levels are too low during the daily, automatic check-back tests of the carrier signal.

I highly recommend the use of the PowerComm 4125 by technicians when it comes to troubleshooting signal levels! Performing routine signal strength testing allows a utility to ensure its carrier equipment is properly calibrated.

Article by Dan Scrobe

References:

EPRI AC Transmission Line Reference Book-200 kV and Above, 2017 Edition. EPRI, Palo Alto, CA: 2017. 3002010123.



Saturday, March 31, 2018

Bushing Health Monitoring



It appears that new capital equipment purchased for distribution and transmission systems is intended not just to replace aging equipment but also to allow the expected life cycle of the new unit to be gauged. This allows for an increase in planned outages to repair or replace parts that are about to fail and a decrease in unplanned outages to address parts that have already failed. With transformers, bushing failures can be catastrophic to human life and to the integrity of the transformer. One way to monitor the health of a transformer bushing is to measure the capacitive reactance of the series capacitances within the oil-impregnated foil layers that surround the electrical conductor. This monitoring is done by advanced electronics such as the systems offered by Dynamic Ratings. Sometimes the monitoring systems can give false alarms. The aim of this article is to help the field technician understand how a bushing health monitoring system works and how to perform manual in-service measurements to verify the legitimacy of an alarm.
 
Transformer bushings typically have a connection point to the outer foil layer. This connection is called the C2 tap, and it is shunted to ground by the installation of its cap cover. For out-of-service testing, when the cap is removed, the equivalent series capacitance from the conductor to the tap is called C1 and the capacitance from the tap to ground is called C2. Both C1 and C2 are given on the nameplate of the bushing in picofarads. Doble testing takes advantage of the C2 tap to measure the capacitances of C1 and C2 and the watts loss of the bushing by applying a test voltage to the bushing terminal. Dynamic Ratings essentially performs this Doble-style measurement while the transformer is in service by removing the cap and shunting the C2 tap to ground through a resistor, or "burden." The Dynamic Ratings system constantly measures the voltage across and the current through all three resistors (one per phase) for a set of bushings, such as H1-H2-H3 on a transformer. For three healthy bushings these measurements stay balanced, so their vector sum is small; if one bushing begins to fail, the magnitude of the sum vector will breach a set circular zone called the bushing intolerance limit, which can be adjusted via the settings of the monitoring unit.

In this test, by knowing the line voltage, frequency, burden resistance, and nameplate capacitances of C1 and C2, a field technician can determine to some degree the health of the bushing via voltage measurements taken with an Arbiter. In this example, the line voltage is 69 kV, the frequency is 60 Hz, and the burden is 500 ohms. The reference chosen was A phase on the low side of the transformer: a potential transformer is connected to the 7.6 kV A phase and its 120 V output provides a signal to the load tap changer controller. Because C2 has a high impedance at 60 Hz and is shunted by only a 500 ohm resistor, C1 and the burden resistance can essentially be treated as a series circuit, and the voltage divider rule can be used to calculate the voltage across the burden. Using these Ohm's Law calculations, the voltage across the burden was calculated to be 1.8 V. And since the equivalent impedance between a bushing terminal conductor and ground is mostly capacitive, it was expected that the burden voltage would lead the system phase-to-ground voltage by nearly 90 degrees. The Arbiter measurements confirmed these calculations. If one of the bushings were going bad, with foil layers within the bushing beginning to short together, then the capacitance of C1 would go up, the capacitive reactance of the C1 region would go down, and the voltage across the burden would go up. All burden voltages Ax, Bx, and Cx are under 1.8 V and evenly spaced about 120 degrees apart; therefore, the magnitude of the sum vector of all three burden voltages would likely be very small.
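
The sketch below reproduces that voltage-divider estimate. The C1 capacitance is not given in the text, so the value used here is hypothetical; roughly 240 pF happens to give the 1.8 V figure, but the real number should come from the bushing nameplate.

// Voltage-divider estimate of the burden voltage for an in-service bushing check.
public class BushingBurdenVoltage {

    static double burdenVoltage(double lineToLineVolts, double freqHz,
                                double c1Farads, double burdenOhms) {
        double vPhaseToGround = lineToLineVolts / Math.sqrt(3.0);
        double xC1 = 1.0 / (2.0 * Math.PI * freqHz * c1Farads);   // C1 reactance, in ohms
        // C1 in series with the burden resistor; C2 is effectively shorted out by the burden.
        double seriesImpedance = Math.hypot(burdenOhms, xC1);
        return vPhaseToGround * burdenOhms / seriesImpedance;     // voltage divider rule
    }

    public static void main(String[] args) {
        // 69 kV line, 60 Hz, hypothetical C1 of 240 pF, 500 ohm burden
        double v = burdenVoltage(69_000, 60, 240e-12, 500);
        System.out.printf("Estimated burden voltage: %.2f V%n", v);   // about 1.8 V
    }
}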

Article by D Scrobe III







Sunday, May 1, 2016

CT Secondary Test Current Injection Methods


I have noticed in the field various ways of injecting test current into a current transformer (CT) circuit during pre-energization commissioning checks.  The purpose of injecting test current is to prove the integrity of the entire circuit connected to a CT once all the burden elements have been individually tested.  This article covers each of these methods.
  
Before doing any current injection, the circuit should be checked to confirm there is only one ground connection on the neutral circuit.  If there is more than one ground connection on a CT circuit, the neutral could carry unintentional ground current and cause relay schemes and metering to function improperly while the equipment is in service.  This check can be done by "ringing" out the circuit to a grounding point near the location of the CT termination board.  Then you would remove the designed ground connection on the neutral circuit and attempt to ring out the circuit to ground again.  If continuity to ground is lost, you know there was only one ground.  If continuity remains, there is another ground you need to go look for.
   
The preferred method of injecting CT secondary test current is to break away the circuit leads at the CT terminal board and connect them to a test set.  The open CT would then be shorted down and grounded.  The designed ground connection on the neutral circuit would be removed, since the test set provides a ground point on the common leads.  It is typical to have the test set inject 0.5 A, 1.0 A, and 1.5 A on the phases and then go to each of the circuit elements to check that the current magnitudes read correctly for each phase and the neutral.  The drawback of this method is that the technician needs to make sure that all the leads are connected back properly.
   
An alternate method, similar to the preferred method, would leave the CT circuit intact and "piggyback" the test leads onto the circuit, creating a parallel path for test current to flow.  This is demonstrated in the first picture.  Here, a 69/13.2 kV delta-wye distribution transformer uses an SEL 551 relay to provide overcurrent protection.  The 69 kV phase CTs and the transformer ground CT provide inputs to the relay.  The current in the neutral circuit for the phase CTs is calculated by the relay and called IG.  A Doble F6150 test set is used to provide test currents.  When pushing current the piggyback way, almost all of the test current travels through the relays, since the high inductive impedance of the CT causes it to act as an open circuit.  The small current that does go through the CT is excitation current and depends on the secondary voltage across the CT.  The reason this is not a preferred method is that, when testing, you want to isolate any unknown variables that might interfere with your test results.
   
Last, an old-school method requires only a 100-watt light bulb with some leads attached to it.  A test set would not be needed.  I don't recommend this, but I have come across technicians who utilize this method because of its versatility and ease of use.  I believe old-school methods are the best way to learn how tests are run.  With today's modernized test equipment, a technician could lose sight of the principles behind a test.  My father-in-law, a retired relay technician with 40 years of service, shared a story with me that hits home on this.  When he was training an apprentice, they were well on their way to a job site to perform CT testing when the apprentice frantically realized that he had forgotten his CT test set.  The trainer ignored this and continued to travel on.  When the apprentice asked why they weren't turning around, the trainer asked the apprentice if he had his Variac, voltmeter, and ammeter with him.  Those items were on hand, and the apprentice learned how these individual items represent the different functions of a CT ratio/excitation/polarity test set.
   
To use the light bulb method, the neutral ground connection must be left in place.  One lead from the light bulb clips onto the hot leg of a 120 VAC source and the other lead piggybacks onto one phase of the CT circuit.  Once this connection is made, a circuit loop is created in which AC current travels from the outlet to the light bulb, through the phase CT circuit, and then to the designed neutral ground connection.  This is demonstrated in the second picture.  I added test switches in my drawing for one of the relays, and these would be closed.  The 100-watt light bulb serves three purposes: 1) limit the current flow, 2) indicate current flow, and 3) provide a known current magnitude.
   
Since typical push currents are around 1 A, as explained above, the light bulb limits the current to about 0.83 A (100 W / 120 V = 0.83 A).  To indicate that current is actually travelling through the circuit, the light bulb lights up.  To provide a known magnitude for the relays to read, each element in the circuit is checked to see if it is reading 0.83 A.  This way you know the relay is in the circuit and is metering properly.
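
For completeness, here is a small sketch of the light-bulb arithmetic.  It assumes the CT circuit's burden impedance is small compared to the bulb, so the bulb alone sets the current.

// Expected test current when a 100 W bulb is used as the current-limiting element.
public class LightBulbTest {
    public static void main(String[] args) {
        double watts = 100;
        double volts = 120;
        double bulbCurrent = watts / volts;   // rated current of the bulb at rated voltage
        System.out.printf("Expected test current: %.2f A%n", bulbCurrent);   // about 0.83 A
        // Every relay and meter element in the phase under test should read this value.
    }
}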
   
Although the first method is preferred, it is always a good idea to learn all methods and understand the benefits and drawbacks of each.  This gives the technician a better understanding of the nature of the test, multiple means to perform a particular task, and a better ability to troubleshoot.
   
Thanks to Edler Power Services and Relay Protection Group on LinkedIn for shared discussion on this.
   
Article by D Scrobe III



  


Monday, November 2, 2015

Providing Customer Voltage


Mains electricity is the term used to describe the common household electric power supply, which in the United States is 120 volts (V) at 60 hertz (Hz).  Electric utilities are required to provide this supply within certain tolerances per tariffs imposed by state utility commissions, which verify compliance with state regulations.  For example, the Pennsylvania Public Utility Commission (PUC) verifies compliance with the Pennsylvania Code for Electric Service.  Since voltages constantly fluctuate with varying loads on the electric system, utilities employ devices at the substation and on distribution lines, such as load tap changers, voltage regulators, and capacitors, to help regulate the voltage that is provided to the customer.  When determining the proper secondary voltage of distribution transformers in substations that feed these lines, it is important to understand the substation application of 120 V.

Based on the PA code, the voltage measured at the service terminals of a residential customer may not vary more than 5% above or below 120 V.  Therefore, the allowable voltage range for a customer is 114-126 V.  Since loading causes voltage drops along a distribution circuit, it would be ideal to set the secondary voltage of a substation distribution transformer as high as possible while still being within tariff, so that customers at the beginning of the circuit would receive 126 V and customers at the end of the circuit would receive 114 V.  This variation is smoothed out through the use of voltage regulators and capacitors along the circuit so that all customers receive as close as possible to the nominal voltage supply.

To show what the secondary voltage of the transformer is, an auxiliary potential transformer (PT) steps the secondary voltage down to 120 V.  In the example, the PT has a turns ratio of 8,400:120.  If a substation inspector observed 120 V on the voltmeter, the secondary voltage of the transformer would be 8,400 V.  A reading of 115 V would indicate a voltage lower than 8,400 V, and a reading of 125 V would indicate a voltage higher than 8,400 V.  Therefore, the 120 V voltmeter is a substation application that indicates nominal voltage and should not be confused with the 120 V that a customer receives.
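
A small sketch of that conversion, using the 8,400:120 ratio from the example:

// Convert the substation PT voltmeter reading back to the actual secondary (bus) voltage.
public class PtReading {
    static final double PT_RATIO = 8_400.0 / 120.0;   // = 70

    static double busVoltsFromMeter(double meterVolts) {
        return meterVolts * PT_RATIO;
    }

    public static void main(String[] args) {
        System.out.println(busVoltsFromMeter(120));   // 8,400 V (nominal)
        System.out.println(busVoltsFromMeter(115));   // 8,050 V (below nominal)
        System.out.println(busVoltsFromMeter(125));   // 8,750 V (above nominal)
    }
}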

Rated voltage is stated on the transformer nameplate.  It provides rated voltages for each setting of the de-energized tap changer (DETC), which changes the turns ratio of the transformer by either adding or subtracting turns of the primary winding.  Set while the transformer is out of service, the DETC typically has five settings and shorts out more turns as the setting number increases.  Therefore, the higher the DETC setting, the less the primary voltage is stepped down to the secondary voltage.  The nameplate voltage of the transformer is taken from DETC position 3.  In the example, a delta-wye distribution transformer has a nameplate voltage of 69,000 V primary - 13,200Y/7,620 V secondary.  Its DETC was set to 5.  Therefore, in effect, the transformer is rated for 65,600 V primary - 13,200Y/7,620 V secondary.  By moving from 3 to 5, fewer turns are left in the primary winding, resulting in less stepping down of voltage.
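
The sketch below illustrates the effect, assuming (for illustration only) that 69,000 V is actually applied to the primary: the same applied voltage produces a higher secondary voltage on position 5 than on position 3, because fewer primary turns are in the circuit.

// Effect of the DETC position on the effective turns ratio, using the nameplate values above.
public class DetcEffect {
    public static void main(String[] args) {
        double applied = 69_000;                  // hypothetical primary voltage actually applied
        double ratioPos3 = 69_000 / 13_200.0;     // nameplate ratio at DETC position 3
        double ratioPos5 = 65_600 / 13_200.0;     // effective ratio at DETC position 5
        System.out.printf("Secondary on position 3: %.0f V%n", applied / ratioPos3);  // 13,200 V
        System.out.printf("Secondary on position 5: %.0f V%n", applied / ratioPos5);  // about 13,884 V
    }
}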

The next thing to consider when determining the proper secondary voltage at a substation is the utility's standard turns ratio for all pole-top, pad-mount, and underground transformers on a particular distribution circuit that step the distribution line voltage down to 120 V for customer use.  In the example, the distribution circuit's nominal voltage is 13,200Y/7,620 V.  Customers are connected phase to ground with transformers rated 7,620 V primary - 240/120 V secondary.  Therefore, all the line transformers have a turns ratio of 7,620:120, which is different than the 8,400:120 of the PT at the substation.  Since customers close to the substation can go up to 126 V, an ideal secondary voltage at the substation transformer would be 8,001 V, which would indicate 114 V on the PT's voltmeter.
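
The sketch below works that number back out: start from the 126 V upper limit, go up through the 7,620:120 line-transformer ratio, and then see what the result reads on the substation's 8,400:120 PT.

// "Ideal" substation secondary voltage derived from the customer upper limit of 126 V.
public class IdealSecondaryVoltage {
    public static void main(String[] args) {
        double lineXfmrRatio = 7_620.0 / 120.0;    // 63.5, ratio of the line transformers
        double stationPtRatio = 8_400.0 / 120.0;   // 70, ratio of the substation PT
        double idealPhaseToGround = 126 * lineXfmrRatio;
        System.out.printf("Ideal secondary voltage: %.0f V%n", idealPhaseToGround);    // 8,001 V
        System.out.printf("Substation PT voltmeter would read: %.1f V%n",
                idealPhaseToGround / stationPtRatio);                                  // about 114.3 V
    }
}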

To help regulate the secondary voltage at the substation under varying loads, a load tap changer (LTC) is employed on the secondary side of the distribution transformer at the substation.  It is important that the LTC pass through the neutral tap when loads change from peak to off-peak and from off-peak to peak, to allow for proper wiping of the contacts of the reversing switch.  Therefore, protection engineers calculate the proper secondary voltage of a transformer as the voltage that would be needed at the neutral tap.  The LTC uses the secondary of the PT to sense when it needs to make an adjustment.

The picture shows how to determine which DETC position should be used, based on the voltage at which a nominal 69 kV subtransmission system is normally operated and the secondary voltage needed for proper voltage regulation by the LTC.

Article by D Scrobe III




Monday, October 5, 2015

Phase Rotation on Delta-Delta Transformer


A recent project of mine involved installing a 34.5/4.8 kV delta-delta mobile transformer in order to remove a distribution transformer from service and repair its metering.  The mobile was parked directly under the 34.5 kV line it would tap from.  The substation crew preferred to bring the high-side leads straight down to avoid any crossing of phases, without taking into consideration which phase gets connected to each H terminal.  Surely the phase rotation of the subtransmission system should dictate how the three-phase power is connected, I thought.  When putting the mobile into service, it would momentarily be in parallel with the distribution transformer, so it would be prudent to determine the proper high-side connections of the mobile.  As it turns out, it does not matter how you wire the high side of a delta-delta transformer: whatever phase conductor gets connected to an H terminal, that phase is assigned to the corresponding X terminal.  To understand why you can get away with this, you need to ask: what really is phase rotation?

There are many analogies for explaining phase rotation on three-phase power systems, but my favorite is the playground merry-go-round.  Imagine placing three kids on it, evenly spaced around the edge, 120 degrees apart.  Pretend each kid is a particular phase of a three-phase transmission line.  As you face the center of the merry-go-round, if a kid is directly in front of your view, then that phase is at zero potential.  If a kid is to your left, then that phase is at negative potential.  If a kid is to your right, then that phase is at positive potential.  When you spin the merry-go-round, you experience a certain sequence of kids passing you.  This is called phase sequence and can only be one of two possibilities, A-B-C or A-C-B, depending on whether you spun the merry-go-round in a clockwise or counter-clockwise direction.

This is not to be confused with the actual spin direction of the generators that drive three-phase power on the transmission grid.  Obviously, a generator wouldn't stop and spin in the other direction for the sake of obtaining a different phase sequence.  Phase sequence depends on how the phases are marked.  For example, Hosensack Substation in PA is an interface between two different transmission owners, Met-Ed and PP&L.  What is marked as a given phase on one side of the interface is not necessarily marked the same on the other.  The conductor that is marked A on one side is marked A on the other.  However, the conductor that is marked B on one side is marked C on the other, and the conductor that is marked C on one side is marked B on the other.  The actual physical conductors and equipment that run through the interface are the same; only the labeling is different.
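
A small sketch of this idea: the same three physical phasors, sitting at 0, -120, and -240 degrees, report a different phase sequence depending purely on which labels are attached to them.  (The angle convention and the helper method are my own, for illustration only.)

// Phase sequence as a consequence of labeling, not of the physical conductors.
public class PhaseSequence {

    // Reported sequence given the phasor angles (in degrees) labeled A and B.
    // B lagging A by 120 degrees means A-B-C; lagging by 240 degrees means A-C-B.
    static String sequence(double aDegrees, double bDegrees) {
        long bLagsA = Math.floorMod(Math.round(aDegrees - bDegrees), 360L);
        return (bLagsA == 120) ? "A-B-C" : "A-C-B";
    }

    public static void main(String[] args) {
        // The same three physical conductors sit at 0, -120, and -240 degrees.
        System.out.println(sequence(0, -120));   // labeled this way, the sequence is A-B-C
        System.out.println(sequence(0, -240));   // swap the B and C labels: the sequence is A-C-B
    }
}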

Now imagine the merry-go-round again and use it to picture how the vector groups of transformers rotate.  Met-Ed assigns phase labels so that an A-B-C phase sequence is always experienced.  Therefore, according to the picture, the rotation of the vector group of the H terminals on a transformer is dictated by the phase sequence of the system that the transformer is connected to.  The picture shows what would happen if A and C phase got rolled.  Although the rotation of the vector group would change, the sine waves are identical.  Therefore, phasing would check out when comparing the low-side voltages of both transformers, and customer load would receive the correct phase sequence, ensuring that three-phase loads such as motors spin in the proper direction.  X terminals are not shown because, in this example, the secondary voltages of this particular delta-delta transformer are in phase with the primary voltages.

   Article by D Scrobe III