Everything posted by RR503

  1. No, it definitely is too much. No rational human being would want to work in the conditions Byford faced -- incessant and uninformed interference from above, etc. Moses was a master of his craft: he built a coalition that would support road construction independent of political leadership, and then carefully avoided angering any constituency with power. It is telling that his downfall partially stemmed from LoMex -- the first of his highways to traverse a large swath of middle class neighborhoods. At the end of the day, though, it's the lack of vision that's going to kill us. Seemingly nobody with power in this region knows what they wish to work towards. Without goals, why make change?
  2. I was at 2 Broadway today for a meeting; arrived just a few hours after the news went out. The number of shocked faces, of employees furtively talking in hallways, the amount of sadness in that building was remarkable. The fate of the agency's operations aside, I and many others are extremely concerned about how Byford's departure -- both in that we are losing his leadership and in that competence and creativity are being rebuked -- will affect the long term health and culture of the organization. The MTA is already risk averse and terrified of proposing change; I fear those tendencies will only get worse.
  3. It makes me a special sort of upset to watch politicians -- those who hold the purse strings -- make declarations like when it's really the fault of those very pols that this is the case. Like, setting aside how insufferably all-or-nothing that statement is, who do they think makes these decisions? Same goes for the members complaining about service cuts. The whole point of redesigns of this sort is that they put resources where they will generate the most ridership and create the most benefit; there are going to be more winners than losers, but there will be losers. Saying "OMG this plan cut service in x area" is like saying "water is wet" in the case of a budget neutral redesign. If anyone cared to fix the budget neutral aspect, then, well, the win/loss balance would tilt further towards the former. This sort of behavior, taken in concert with today's news, _really_ makes me worry about the future of transit in NYC.
  4. Aside from the merge being between rather than , there is no difference in the number of merges in your plan vs the one put forth by NYCT, and NYCT’s better serves Nostrand and doesn’t shove all of Lex into one mediocre terminal. As for options at Utica, you can transfer, which in my experience is what most people seem to do anyway.
  5. Flatbush can do (likely a good bit) more than 18 if you recrew faster and schedule tighter. A not insignificant amount of time is consumed via pocket dwell today. They use both pockets at Utica AFAIK. Policy would definitely help at Utica, but it's unclear to me how much can be changed without also tinkering with schedules to absolutely minimize time in pockets. It's also one of those things where the other service alternative provides a better overall experience -- at least someone new gets a one seat ride up the East Side.
  6. I agree! Luckily, there's a simple fix for Rogers: add two switches -- and no, there is room to do that. Multiple NYCT studies can confirm. I don't follow how a lack of turn locations factors into this. to Flatbush, to Utica, to New Lots
  7. That, health/pension costs, and the stagnation/decrease in labor productivity (wages, which get all the attention from the Post crowd, are not really an issue).
  8. This is SOP with CBTC -- it allows acceleration and braking rates to be set to basically whatever level is desired. Already CBTC operates trains significantly faster than fixed block does; here's to that being expanded. Would you rather the train sit outside the terminal for 10 minutes a la ? It's annoying, but until runtimes are stable and the agency can find the resources to rewrite schedules to reflect runtime gains, this is what we've got.
  9. $0 of congestion pricing revenues will go to the operating budget. It's being bonded to pay for the shiny new objects in the 50 billion dollar capital plan. The operating budget remains in tatters. As for the lockbox, the bill as written merely prevents funding from being removed from the MTA, it doesn't guarantee any adds. The long-term utility of just more money is questionable at any rate; MTA's operating costs on a per service-hour basis as well as in aggregate have been escalating wildly ahead of inflation over the past decade or so. Without adequate cost controls and resource prioritization, adding more would just amount to kicking the can down the road.
  10. I don't think CI needs express service. There are one or two new devs coming up down there, but the center of ridership growth on Brooklyn routes over the past 15 or so years has been in Sunset Park, Park Slope, Carroll Gardens, etc, and without other routing interventions, the Dekalb/Manhattan/Queens segments of these routes basically mean they're maxed out, save for maybe 1tph here or there. For example, on the , while back in the '70s only about 60% of ridership was on the IND portion of Culver, today over 75% is. Obviously in the specific case in the riders at Church and 7th would benefit from express trains, but generally speaking the center of line ridership is too far north (and is tending further northwards) to sustain express service at the expense of local service from Church north -- let alone from Kings Highway north. The same goes for any . On Sea Beach, even if _every single_ Stillwell rider used this and this ran local from Stillwell to Kings Highway, you'd only be serving 22% of Sea Beach ridership. The fares a bit better -- 28%, and 62nd could be a legit transfer opportunity if scheduled right -- but still, not great. And this is all before we discuss the operational barriers to some of these issues, whether that be the complexities that come with trying to run a short turn op and an express/local crossover without a grade separated junction (hello Parkchester), or scheduling challenges given time savings, or otherwise. None of those barriers are _intractable_ but they make an unattractive ridership proposition seem suboptimal operationally too. Really unless you can increase the size of the capacity pie, you're robbing one area to serve another -- and are making a poor tradeoff at that. My question is why not focus on non-express service improvements in runtime? There is a lot of messy signaling and merging on BMT south; cutting down some of that could save all riders time. 
Now, the above is qualified with the assumption that we can't increase the size of the capacity pie. But say we went ahead with ops fixes, and ended up with the ability to run 30tph down the 4th Ave express tracks. Then what? I'd suggest . West End express has intermediate stops, which allows for transfers and larger catchment, has bad-but-not-terrible short turn facilities at Bay Parkway, and is really quite fast. I don't think you'd necessarily want to split the evenly, but I can see a world in which it'd work.
  11. Well sure, but again in the base schedule it's written to work as you suggest -- the and can 'float' and end up perfectly in between the and service created by the Nevins transfer as they don't interact with lines aside from the and . In fact, excluding the , the only part of the A division that doesn't get even headways overnight is the Eastern Parkway corridor, where they've decided an easy transfer at Nevins > evenly spaced service. It's when running supplemented service, especially when running schedules that rewrite service on one of the two lines but not the other, that this all gets messed up. I don't see how countdown clocks prevent the scheduling of even headways? Or is your point that they may inform people as to how long they have before the train arrives if they're considering some untoward action?
  12. The base schedules on Lex and IRT West align the and for a cross-platform transfer at Nevins and then interpolate the and to provide an even enough 9/11/9/11/9/11 pattern on the trunk portions of those routes, as intra-corridor ridership is high and transfer flows between trains aren't strong enough for the agency to prioritize a connection for one direction (ie giving => xfer riders a 3 minute wait would mean giving => riders a 17 minute wait). The and take sufficiently different routes that the calculus is different for them -- they run a 6/14/6/14 IINM. Worth noting that these are _base_ schedules. Supplements frequently throw this all out the window.
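To make the gap patterns above concrete, here's a toy sketch (my own illustration, not NYCT's scheduling method): two services each running a 20-minute overnight headway, where the offset between them is pinned by the transfer rather than chosen for even spacing, which is exactly how you end up with 9/11 or 6/14 trunk gaps instead of a flat 10/10.

```python
# Hypothetical sketch: merge two services with identical headways onto a
# shared trunk and see what gap pattern a given offset produces. The
# 20-minute headway and the 9- and 6-minute offsets are illustrative
# assumptions, not published schedule values.

def trunk_gaps(headway_min, offset_min, cycles=3):
    """Return successive gaps (minutes) on the shared trunk when a second
    service runs offset_min behind the first, both at headway_min."""
    a = [headway_min * i for i in range(cycles)]
    b = [offset_min + headway_min * i for i in range(cycles)]
    merged = sorted(a + b)
    return [merged[i + 1] - merged[i] for i in range(len(merged) - 1)]

print(trunk_gaps(20, 9))  # -> [9, 11, 9, 11, 9]
print(trunk_gaps(20, 6))  # -> [6, 14, 6, 14, 6]
```

The point being: once the offset is fixed by a transfer somewhere else on the line, the trunk pattern falls out mechanically.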
  13. No speed changes, though perhaps they recalibrated them... I'd be curious to know whether the speedups have a discernible start point, though. What I've noticed is that rushes on corridors like the Lex and QB will go okay enough until some train overdwells/is too timid with STs, at which point everything goes to hell. It's possible that as crews got moved around during the pick, the crews least comfortable with those conditions ended up in the rush.
  14. The effect is much less pronounced in tunnels -- a good bit of travel time generally registers between stations there. Regardless, given the somewhat constant direction of the error, if you know how to weed it out of your calculations (using arrival-arrival or departure-departure metrics) you can circumvent the issue.
  15. This is exactly what I mean by elevated stops blending together. Because the system is beacon-based, near station ~ in station.
  16. Define "bad". Can you get precise estimates of dwell times and such; do elevated stops sometimes blend together? Absolutely. But if you know how to play with it right, you can extrapolate a lot of interesting info.
  17. No, they know when things depart and arrive, it's an issue with the structure of the feeds and the feeds' relationships with the various data sources that makes it difficult to get consistent data specifically around terminals (rest of route is generally quite good).
  18. Disagree. E180 isn't really an issue in the AM peak -- the clever use of B lead makes for little runtime loss or variability on that stretch sb when the is express. NB it's an issue which should certainly be looked at, but should likely be seen through the lens of A division capacity analysis when they get towards 0 day for Lex (and I expect IRT West) CBTC. The one they've come up with certainly has its risks, especially as it pertains to loads/dwell times, but is bidirectional, easy to implement, likely to be popular, and solves a nasty merge. Some data on E180; note the absence of a significant runtime increase in sb AM rush service, and the marked presence of one in the PM rush, one which aligns with the beginning of express service. [Sb and nb service charts; the lack of overnight data is because for whatever reason the MTA's public data feeds are absolutely horrendous at reporting the times at which trains leave E180 when it's the service origin station.]
  19. I don't doubt that performance was _similar_, but the lack of any review to determine a) whether that was the case and b) if it wasn't, what the safety impacts would be is concerning. My point was much more about procedure than actual risk. Page? Document isn't searchable so I can't say authoritatively, but as far as I can tell the NTSB's tests and test results demonstrated both that the brakes were deficient and that a collision would have occurred even if the brakes were not deficient. At any rate, car stopping distances at an arbitrarily assigned brake rate are not the metric we should use for evaluating safety; instead we should compare against the braking distance standard that existed before the collision, in other words NYCT's car performance safety minima for pre-brake degradation trains. I did this in my above post, but realize I spent very little time explaining what standard I was using, why, etc -- my apologies. As it so happens, the braking standard in effect for cars from 1948 to 1995 is the one we have today, and thanks to the STV report, we know that it requires a maximum braking distance from 35mph of 332 feet on flat rail. A 2.25% gradient over part of the run would shorten that distance, but not by >52 feet: from a starting speed of 34-36mph, the signals on the bridge were by NYCT's own guidelines incapable of protecting a train at MAS. Here's the braking standard used back then, which is the same as is used now. Also @Amtrak706, great post on field shunting! The acceleration mod made in 1996 was indeed to 100% field strength.
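As a sanity check on the standard cited above (my own back-of-envelope math, not NYCT's design calculation): a 332-foot maximum stop from 35mph works out to an average deceleration of roughly 2.7 mphps under constant-deceleration kinematics.

```python
# Illustrative only: infer the average deceleration implied by a
# stopping-distance standard, via v^2 = 2*a*d on flat rail.

FPS_PER_MPH = 5280 / 3600  # 1 mph = ~1.4667 ft/s

def implied_decel_mphps(speed_mph, distance_ft):
    v = speed_mph * FPS_PER_MPH          # initial speed in ft/s
    a_fps2 = v * v / (2 * distance_ft)   # constant-deceleration assumption
    return a_fps2 / FPS_PER_MPH          # convert back to mph per second

# The 35 mph / 332 ft standard discussed above:
print(round(implied_decel_mphps(35, 332), 2))  # ~2.7 mphps average
```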
  20. https://new.mta.info/sites/default/files/2019-12/MTA NYCT Subway Speed and Capacity Review_Final Report.pdf Pages 49, 162
  21. To the best of my knowledge, the following is the history of the braking/signals issue. Fair warning: this will be long. The design of signal systems is a project that requires consideration of many variables. At what rate does your equipment accelerate? At what rate does it brake? What assumptions can we make about train operator performance? About switches? Etc. The root cause of the timer/control line extension problem is that NYCT did not stick to or retroactively apply consistent standards for all of these variables, which in turn caused degradation in the safety provided by the signal system (this is also basically the tl;dr of this post). In the oldest bits of the subway's signal system, safety was programmed around first-gen IND and BMT equipment -- your Standards, R1s and the like. As you all know now, acceleration performance was not governed to the performance of these older cars, and over time, the safety provided by the signal system was degraded. For example, while the R1 had a starting acceleration of 1.75mphps and a braking standard of 30 to 0 in 230 feet, the R10 (and all succeeding car classes, up to the R188) had starting accelerations of 2.5 and a 30mph braking distance of 250 feet. These changes alone ate into the level of safety provided by the signal system. Per the presentation I linked, this first round of non-compliance reduced signal safety by 20-35% depending on the location. To the point of actual danger in this stage of the problem's development, I am not aware of any crashes caused by control line deficiencies alone pre-1995, but given the spottiness of accident reporting and the difficulty of finding information on causes of the accidents which were reported, I would be wholly unsurprised if there had been a few. 
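The R1-to-R10 figures above can be turned into brake rates with simple kinematics (my illustration; constant deceleration assumed, which real braking curves only approximate): the 30mph/230ft standard implies a noticeably stronger average brake than 30mph/250ft, and the distance grows by about 9% -- margin the legacy signal design never assumed.

```python
# Illustrative comparison of the two car braking standards mentioned
# above (30 mph to 0 in 230 ft vs 250 ft), constant-deceleration math.

FPS_PER_MPH = 5280 / 3600

def avg_brake_rate_mphps(speed_mph, distance_ft):
    v = speed_mph * FPS_PER_MPH
    return (v * v / (2 * distance_ft)) / FPS_PER_MPH

r1_era  = avg_brake_rate_mphps(30, 230)   # ~2.9 mphps
r10_era = avg_brake_rate_mphps(30, 250)   # ~2.6 mphps
print(round(r1_era, 2), round(r10_era, 2))
print(f"{250 / 230 - 1:.1%} longer stop under the newer standard")
```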
NYCT recognized the safety issues that train/signal performance mismatches created as meriting action as early as 1980, when a capital plan report enumerated replacement of legacy signalling to bring the system in line with modern standards as a key priority for the future. However, aside from the use of updated signal design standards in the replacement of legacy signalling, little retroactive work was done to fix these issues. Then, of course, the braking debacle happened. The general outline of the history that @Amtrak706's linked SubChat post lays out is essentially correct. During the changeover from cast iron to composition brake shoes, the braking effort of trains was significantly reduced. Depending on who you ask, that reduction of braking effort was either inadvertent and unknown, understood but thought of as acceptable, or indeed was a sought-after change. Short of finding the people/documents that surrounded this decision, there is no way of knowing for certain, though I tend to believe this degradation was caused by gross oversight, given that the IG report documents many basic safety analysis failures in that era (ex: nobody thought to check whether giving R32s 115hp motors was a good idea from a safety perspective). Intent aside, the braking effort reduction all but compromised signal system safety -- per the IG report, adoption of the degraded brake standard would have put fully half of the system's signals in non-compliance. This issue was known well in advance of the Williamsburg Bridge accident, but (in another demonstration of the unanalytical trust in the signal system) was deemed a tolerable risk, as the brake system degradation was thought to be within the signal system's safety margin -- something only true where margin hadn't been consumed by acceleration changes. We arrive at the WillyB disaster.
The signal system on the bridge was designed around legacy car capabilities; the fated signal, J1-128, was designed assuming the maximum attainable speed of a train passing it was 27.9 miles per hour, and thus provided 270 feet of stopping distance beyond it. But of course, both dimensions of performance changed that morning. The MAS of equipment of 1995 passing that signal was in the 34-36 miles per hour range, and the braking systems attached to that speedy equipment were weaker thanks to the R10-era downgrade and the composition related performance problems. How much weaker? The accident train passed J1-128 somewhere around 34 miles per hour, and likely impacted the (brownM) ahead of it at 18mph. That was 288 feet from the trip arm of J1-128. So, change in velocity is 16mph, change in distance is 288 feet, area gradient is ~flat (see below diagram; impact occurred on a 2.25% upgrade, but that upgrade begins between 128 and impact, so even at the point of impact only about half of the train was on an upgrade which for our purposes can be equated with being flat). Plugging these values into our handy kinematics equations (after some unit conversions) yields a deceleration value of -2.1 mphps, or well below the specified -3.0, and the specified-from-30mph-accounting-for-various-confounding-factors average of -2.6. While poor braking undoubtedly played a role here, it is important to examine the counterfactual, or what would have happened if the train had performed per specification. Assuming a starting speed of 34mph and deceleration per the braking curve in the STV report (the one linked just 2 sentences ago), the train would still have needed >>270 feet to stop -- on flat rail it'd be ~330 feet, but at that point the gradient effect would be nontrivial. It would have collided with the at a lower speed, to be sure, but the signal system was still fundamentally unsafe. 
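The -2.1 mphps figure above can be reproduced directly (same kinematics, same flat-rail simplification discussed in the gradient aside):

```python
# Reproducing the accident-train deceleration estimate from the post:
# v_f^2 = v_0^2 + 2*a*d, solved for a, with flat rail assumed per the
# gradient discussion above. Illustrative arithmetic, not an official figure.

FPS_PER_MPH = 5280 / 3600

def decel_mphps(v0_mph, vf_mph, distance_ft):
    v0 = v0_mph * FPS_PER_MPH
    vf = vf_mph * FPS_PER_MPH
    a_fps2 = (vf * vf - v0 * v0) / (2 * distance_ft)
    return a_fps2 / FPS_PER_MPH

# ~34 mph past J1-128, ~18 mph at impact, 288 ft traveled:
print(round(decel_mphps(34, 18, 288), 1))  # -> -2.1 mphps
```

Well below both the specified -3.0 and the confounder-adjusted -2.6 average, as noted.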
Diagram, as promised (h/t @RailRunRob); for how to read see here, and don't miss the note about all control lines being two-block control unless otherwise shown. These deficiencies in deceleration and excesses in acceleration were the subject of a number of reports around the time of the Williamsburg Bridge disaster, some of which were summarized in the two documents I provided (this is what I was referencing when talking about risk analyses with normal brakes btw @Amtrak706). Again, as shown below, even with good brakes, a significant portion of the system's signals were out of compliance with car performance. Here's the slide on the West End line before its late 90s resignalling -- "improved braking" means restoration of the 30mph/250' standard. The preponderance of locations where sub-100% safety was provided was the major motivator behind the 1995-present signal mod effort; it became adopted policy to rectify all of these deficiencies (a list, mind you, that grew with time) to at least a >100% safety level. (For more on this subject, see the discussion towards the end of the MTAIG report, as well as the latter third of the powerpoint.) You think this post is done, right? Lol no. There's the equally important issue of standards to cover; I will attempt to be more brief here. Just as train performance has changed over time, assumptions about train operator performance have changed over time. The two biggest changes therein were the assumption that trains go full speed through stations and the assumption that trains don't comply with posted speeds. As I'm sure I've mentioned before, the original 1960s signal design for the Grand Central area on Lex did not enforce any curve speeds with timers -- signs only. Timers were only installed in the '70s, IINM.
It goes without saying, but relying on signs alone to slow express trains approaching GC in excess of 40mph to speeds in the 15-20mph range for those curves is not something that would be even remotely permissible today, and indeed has been the driving force behind many GT adds. The assumption of 15mph exit speeds equally had impacts on signal design. If you browse through the IND signal prints on nycsubway.org, you'll notice that leaving signals always have two block control and provide little stopping distance beyond the second signal. You'll also notice that in-platform ST signals with cutbacks that reach beyond the platform (see, for example, on the 34 St print, signal B1-1035) provide precious little stopping distance beyond the end-of-platform-signal that would be the tripping signal (for 1035, that'd be B1-1031) if a train were to clear the ST cutback -- the dotted portion of the control line -- with a train stopped in the dotted portion. We no longer assume that trains slow at stations; the clearest manifestation of this change are the one shot GTs at the leaving ends of stations on lines like Pelham and West End. This assumption, too, has required modifications to the signal system, and has, along with the assumption of non-compliance with posted limits, driven the installation of DGTs at stations like Roosevelt Ave and Forest Hills where switches placed at the leaving end of the platform would have, under the assumption of 15mph leaving speeds, not required any enforcement. Other sorts of signal mods do exist, but the above cover the driving forces behind most of them: control line safety and train operator performance standards. Please correct me if you see mistakes in this. In closing, I want to _strongly_ emphasize that all the above should not be seen as an unqualified defense of NYCT's mod effort. 
Cheapness in mod design (subbing one shots for two shots, doing one shots instead of control line extensions and ST cutbacks, refusing to cut in additional signals or insulated joints to mitigate mod impacts, etc) massively worsened the impact of said mods, contributing to the runtime and capacity losses that have driven the system to where it is today, and lagging signal system replacement timelines have increased the impact of mod campaigns simply because there exist many portions of track where signal systems (and accompanying mods) that should have been retired ages ago are still in service. Operational rot of other sorts has increased the impact of mods beyond design. NYCT's well documented discipline culture and maintenance disorganization have frightened TOs into taking timers well below their posted speeds, while overlong dwell times, overly complex service patterns, poor terminal operation and flagging rules have aggravated the impacts of mods on system capacity and performance in other ways. None of these aggravating factors should be at all minimized, as an agency that emphasizes the negative impact of a 2 shot GT30 but simply accepts 75 second dwell times is an agency failing to see the full operational picture, as it were. I could write posts of equal (if not longer) length on these other ops issues, but I daresay this is enough for tonight.
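The "percent safety" framing that runs through the post above can be sketched numerically. This is my own illustration using the J1-128 figures from earlier and a nominal -3.0 mphps brake rate, ignoring the confounding factors a real braking curve includes, so treat the outputs as directional rather than exact.

```python
# Hedged sketch of the percent-safety idea: compare the stopping distance
# a signal actually provides against the distance a train needs at its
# attainable speed. J1-128 example values: 270 ft provided, 27.9 mph
# design speed, ~34 mph modern MAS, nominal -3.0 mphps brakes (degraded
# rate ~2.1 mphps per the accident estimate).

FPS_PER_MPH = 5280 / 3600

def signal_safety_pct(provided_ft, speed_mph, brake_mphps):
    v = speed_mph * FPS_PER_MPH
    needed_ft = v * v / (2 * brake_mphps * FPS_PER_MPH)
    return 100 * provided_ft / needed_ft

print(round(signal_safety_pct(270, 27.9, 3.0)))  # design-era assumptions: >100%
print(round(signal_safety_pct(270, 34.0, 3.0)))  # modern MAS, spec brakes: <100%
print(round(signal_safety_pct(270, 34.0, 2.1)))  # degraded brakes: far below 100%
```

Faster trains alone push the margin under 100%; degraded brakes just make the shortfall impossible to miss, which is the whole arc of the story above.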
  22. IINM some of this is covered downthread of that post, but the focus on inshot is unjustified given its peripherality to emergency braking in general and the degradation in particular — it was brake cylinder pressure mismatches during the cast iron/composition changeover that killed things. It invalidates it because the TA didn't make mod decisions off of degraded brake performance. They made them after having restored brake function, a point at which (as you'll see in the reports attached to my above post) there still existed significant numbers of safety deficiencies in the system. As I understand it, the changeover was rationalized in two ways: 1) operators were expected to be competent and not dead/sleeping/suicidal, and 2) acceleration increases ate into but did not generally surpass the margin of safety provided by the signal system. Both assumptions held true, and were not generally put to the test, because a LOT has to go wrong for these deficiencies to be discovered heuristically (somebody operating a train in exactly the way that gets you to MAS, doing so facing a red signal with a train at a specific location ahead, etc). The brake mods just made it _that_ much easier for these things to be laid bare. Again, I recommend you read the reports I linked, because they cover a lot of these issues and lay out how safety issues went beyond just brake degradation. For sure there are some redundancies (ex: assuming worst possible brake rate while also providing 35% safety margin), but there really isn't much overlap between assumptions otherwise. Switch approaches, operator behavior and acceleration performance all determine interrelated but not overlapping bits of signal design. IND engineers allowed 35% outside stations, 10% inside. This is basically what saved the system from the R10s, but again it was a design that relied on a lot of assumptions about operator performance that didn't stand up in reality.
When I’m back in NYC, I will do a more proper write up of the history here. It’s worth understanding fully.
  23. No, it’s the other way around. Setting aside its inaccuracies and spurious details, that post tells about half of the true story — ie the mods that led to the brake degradation which in turn caused the collision. It does not cover, however, the actions taken by the authority after the crash to restore brake performance. For those, read the MTAIG report on the crash (h/t @Union Tpke) https://drive.google.com/file/d/1ICyGZNzCg_JjWxQe_APl4Ua-eEXMFtfa/view?usp=drivesdk And this (h/t @Stephen Bauman) https://drive.google.com/file/d/1WwN5B2dl7zgouxaGPOjX1hi8uqFSowoQ/view?usp=drivesdk Generally, the premise that GT/control lines mods are just a function of braking distances is itself flawed. Numerous other assumptions about train operator performance (ex: whether or not it was safe to allow sub-100% safety margins on signals which would only be last before a train ahead if a TO had proved their awareness by clearing a ST, whether trains actually slow to 15mph, whether they obey posted limits at switches, etc) and train performance (1.75mphps starting acceleration of an R1 vs 2.5 for R10 and onwards) changed over time, all of which required corrective action for signal systems designed by legacy standards. Those risks were unknown and then ignored until the Williamsburg Bridge crash finally prompted action. [Edit]: Another key point: there isn't a 1:1 relationship between brake rates observed in stopping distance tests and those used in signal design. Today, we assume trains brake at 1.4 mphps for the purpose of signal design as that was the value arrived at in a 1999 test of absolute worst case braking conditions. Not all signal designs assume worst case braking, however, which in turn causes even more design variance.
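To put a number on the 1.4 mphps design-rate point above (my arithmetic, constant-deceleration assumption): designing signals around an absolute worst-case brake nearly doubles the stopping distance relative to what the car performance standard itself allows from 30mph.

```python
# Illustrative: stopping distance implied by a given brake-rate
# assumption, showing why a worst-case 1.4 mphps design rate demands
# far more control line than the ~250 ft / 30 mph car standard.

FPS_PER_MPH = 5280 / 3600

def stop_distance_ft(speed_mph, brake_mphps):
    v = speed_mph * FPS_PER_MPH
    return v * v / (2 * brake_mphps * FPS_PER_MPH)

print(round(stop_distance_ft(30, 1.4)))  # -> 471 ft at the worst-case design rate
# vs the ~250 ft the car performance standard allows from 30 mph
```

Which is why, as noted, designs that don't assume worst-case braking end up with such different geometry.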
  24. The issue isn’t that today’s brakes are worse than before — they’re about the same; one of the first efforts post-WillyB was upping brake cylinder pressures so that train stopping distances conformed with previous standards — it’s that the signal system/train relationship as it existed pre-1995 was unsafe in many areas even at undegraded brake performance. That was the realization forced by the accident.