
Last Epoch News

Last Epoch permanently bans accounts due to new real-money exploit

Last Epoch accounts that engaged in gold duplication and exploits connected to real-money trading have been permanently banned, as developer Eleventh Hour cracks down on bad actors abusing the in-game economy. A new rival to the likes of Diablo 4 and Path of Exile, Last Epoch has enjoyed a huge amount of success since its 1.0 launch in February. Some players, however, have been duplicating in-game gold to trade for real money, behavior that Eleventh Hour says will result in a permanent account ban as it outlines new measures to combat exploits tied to the virtual economy.


Read the rest of the story...


RELATED LINKS:

Last Epoch dev plans "two big changes" to policy after player survey

Last Epoch is going to nerf OP builds in response to player vote

Last Epoch asks if you want broken, OP builds to be fixed or not

RMT and Exploit Statement

Hello Travelers,

Today we wanted to connect with everyone regarding the recent gold exploit, as well as exploits in general within the online environment. We want to talk a bit about what we're doing about these issues, and what our plans are for the future.

We take exploits within Last Epoch very seriously and ensure that any time an exploit comes up, we give it our full and immediate attention. We strictly enforce our Terms of Use when it comes to exploits and RMT (Real-Money Trading), and as such, one can expect that abusing such exploits or engaging in RMT, both buying and selling, will result in a permanent account ban.

Since 1.0 launched, we have been aware of two exploits. The first allowed duplication of items through the Bazaar. We were made aware of this exploit three days after its first discovery and deployed a fix for it the same day. The second was the recent gold exploit, for which we created and deployed a fix within 24 hours of being made aware of it.

[h2]Current Actions[/h2]

In response to the gold exploit, we have reviewed gold activity on an account level, identifying and banning those accounts that have been participating in illegitimate gold generation. We don't want to speak too much about how this was tracked, as that would only give bad actors information to help them avoid detection. With this, we are aware of concerns about legitimate accounts being falsely flagged. We are quite confident our tracking is not falsely flagging accounts, though as always you can appeal any moderation action through support.lastepoch.com.

We have also banned accounts with duplicated items from the first exploit, and we are continuing to process and ban all accounts linked to RMT services (both buying and selling). These regular account bans for RMT involvement are actively removing significant amounts of gold from the economy, which we expect will help bring down inflation.

We are aware of the inflation that has occurred within Merchant's Guild and are discussing methods to deflate the economy again. We expect that removing much of this gold through bans of both gold-exploit abusers and RMT-involved accounts will help quite a bit, though we are also discussing additional actions. At this point, we're not implementing any actions that we're able to talk about; however, if we do plan to take any actions that directly affect players, we will communicate them.

[h2]Moving Forward[/h2]

We want to make sure we're not only reacting to exploits, but also working on proactive measures to prevent exploits in the first place. We're making sure to act both on a technical level and on a user level. Actions we're taking on a technical level are much harder to discuss, as describing them would only give bad actors information about what to try to exploit next. However, we want to state strongly that we are not ignoring these exploits and are taking active measures to combat them.

Lightless Arbor and Runes of Creation generating legitimate duplicate items has also been contributing to the perception of item dupes. Some of this comes from limited awareness that these legitimate duplicates are possible, and of how many can occur. It may be surprising to learn that Lightless Arbor's Vaults of Uncertain Fates can actually produce up to 12 duplicates of the exact same item. These legitimate duplications can produce some very suspicious-looking listings on the Bazaar. So when we talk about changes on a user level, what we mean is that we are working not only on preventing exploits, but also on preventing this kind of confusion in the user experience. As our first action to help address this, we will be introducing mirrored item card graphics to help visually represent when an item is a legitimate duplicate (on duplicated items, the 2D art for the item will be mirrored / flipped).

We've also found that, in general, the Merchant's Guild's centralization of gold leads to a fair amount of inflation all by itself, and as such, we are looking into and discussing inflation-control methods such as implementing a Gold Tax on transactions in 1.1. We don't want to implement this kind of change during the current cycle, as it would be a fairly fundamental change to the Bazaar and Merchant's Guild. We'll have more information concerning inflation countermeasures, tax or otherwise, as we get closer to 1.1.

[h2]Closing[/h2]

We don't think it's a very controversial statement to say that abusing an exploit and ruining the game for other players is not acceptable, and that doing so should result in a ban. We also acknowledge this isn't just on bad actors; it's also our responsibility to do everything we can to prevent these exploits from being possible in the first place.

With that said, we acknowledge the community's feedback about these exploits slipping through, and we want to be clear that we are taking these events very seriously. We are actively investing in both addressing and preventing these exploits, with no effort spared and no shirking of the issues.

Last Epoch Patch 1.0.5 Notes

Hello Travelers,

In today's Hotfix, we are fixing a list of things for you!

Changes


  • Improvements have been made to Monolith visuals and performance along with several bug fixes
    • Fixed a bug where some channeled movement skills such as Rampage could end abruptly in the Alpine Halls monolith echo
    • Fixed issues with trees obscuring your view in the Hidden Oasis monolith echo
  • Added a fix that will warn players when files must be verified.
  • Fixed Loot filter toggle "X" sensitivity
  • Fixed bugs where the following skills' damage areas were not scaling with area modifiers from their trees or from items
    • Abyssal Echoes
    • Dancing Strikes (not all parts of the combo were affected by this bug)
    • Erasing Strike (just the initial hit, not the void rifts)
    • Forge Strike
    • Healing Hands
    • Necrotic Mortar (from Summoned Skeletal Mages)
    • Reap (from Reaper Form)


  • Fixed Passive and Skill Tree localization issues
  • Updated visuals for Announcement banners
  • Updated Unique Reward icon in Monoliths from Ring to a generic icon
  • Added missing name to Graveyard



Bug Fixes


[h2]Skills & Passives[/h2]

  • Fixed a bug where the player's Falcon could fail to be unsummoned after the player has died
  • Fixed a bug where Warpath would cause players to become stuck in place and unable to move
  • Fixed a bug where Drain Life with Blood Pact and Ghostflame with Arteries of Malice would stop channeling when at very low current health
  • Fixed a bug where Healing Hands was still scaling with cast speed instead of melee attack speed when Seraph Blade was allocated
  • Fixed a bug where Gathering Storm was still scaling with melee attack speed instead of cast speed when wielding a staff and Lagonian Diplomacy was allocated
  • Fixed a bug where Thunder Tempests from Tempest Strike's Cloudburst Conduit could not hit enemies
  • Fixed a bug where Added Spell Damage Affix with Tempest Strike did not work
  • The grace period for your minions now ends when your own grace period ends
  • Fixed a bug where attempts to cast minion-targeted abilities like Dread Shade on minions that were in grace period would always fail
  • Fixed a bug where stationary minions would never leave grace period, resulting in them never attacking


[h2]UI[/h2]

  • Fixed a bug where items sold in Online mode were displaying original price in the "Buy Back" tab
  • Fixed a bug preventing Defensive Conversions from displaying in the character sheet online


[h2]Other[/h2]

  • Fixed a bug where Soul Embers would persist after the dungeon was completed
  • Fixed a bug causing Void Despair to be invisible
  • Fixed an error when leaving Offline mode
  • Fixed a bug where players spawning into a new location would reveal part of the map too soon

Last Epoch Hotfix 1.0.4.2 Notes

Hello Travelers,

In today’s Hotfix, we are reverting the change to the cost of Despair Glyph Prophecies.

Alongside this, we wanted to talk a bit about why this change made it into the last patch, about similar changes, and about our specific reasoning behind them.

[h2]Glyph of Despair Prophecies[/h2]

This change (the favor cost for Despair Glyph prophecies) didn't have anything to do with Merchant's Guild (MG) - at least, we weren't aware of a significant issue being caused by MG players swapping factions to 'abuse' it. We believe MG already has a very good pathway to Despair Glyphs with Vault of Uncertain Fates, so MG players wouldn't have been benefiting from this prophecy by swapping.

We had actually decided before 1.0 that the favor cost for this reward was too low, along with other adjustments in a late balance pass. With the post-launch changes and fixes, however, this was simply the soonest the adjustment made its way through the pipeline, with other priority fixes pushing ahead of it. So this change was purely about the prophecy's own balance within Circle of Fortune (CoF). One of the discussions this has started internally is a review of these delayed adjustments.

Our previous pipeline didn't include design reviewing the final patches when they went out, so unfortunately we missed re-evaluating this change. We have adjusted our pipeline to try to catch delayed changes so we can be more mindful of them and consider whether they should go out or be held until the next cycle.

TLDR: The change was made before 1.0, got stuffed in the pipeline, and just finally got out without further design review on timing. We’ll be keeping a better eye out for delayed changes like this.

[h2]Arena Key Sell Prices[/h2]

The impetus for the Key value nerf was, as we had stated, that it was being leveraged by MG players swapping factions. However, this was a change we had planned for some time, and the faction swapping was really just the final push needed to prioritize it. Originally, keys had a higher sell value so they retained some worth for players who had no use for or interest in them (those who never played Arena).

Keys were never intended to be a source of gold farming, which is something that crept up over time as they became easier to target in the Monolith. The interactions in CoF were the straw that broke the camel's back, so to speak. We definitely agree we did not sufficiently communicate the reasoning behind this change, and that misunderstanding is on us. We'll be making an effort to include the bigger-picture reasons behind changes like this, rather than only the immediate reason, as we had.

TLDR: Arena Keys were never meant to be a method of farming gold, and that farming had reached such prominence that even MG players had begun to feel the need to swap to CoF to leverage it.

[h2]Mid-cycle Changes[/h2]

Regarding mid-cycle changes, we approved these changes because they don't impact moment-to-moment gameplay and won't suddenly leave a build unable to clear a difficulty of content it could clear previously. The survey we previously ran was about those kinds of build-impacting changes, rather than general systems. We should also mention that this is our guiding star, not our railroad: in specific scenarios we may decide another route is better, such as not enacting this Despair Glyph change during the current cycle. For these kinds of changes, we will use our best judgment, and we will always be receptive to hearing feedback on them.

We have also seen some feedback regarding the survey not covering bug fixes that result in buffs. The reason for this is that we have always intended to follow through with bug fixes that result in buffs mid-cycle, and did not feel that was controversial enough to survey. We won't be holding back bug fixes that result in buffs.

TLDR: Our intention with mid-cycle changes, based on the survey, is to avoid nerfs to character power or to moment-to-moment gameplay. We will still make changes that are buffs. These are guides, not rules - we will exercise discretion.

[h2]Thank you[/h2]

As always, thank you for voicing your feedback to us - it is sparking discussions about the timing of these changes, as well as how we communicate them. We hope we can clear up that both of these changes were things we had been discussing for some time and were not just responses to abuse, and that this helps put to rest some of the conflict we're seeing.

CHANGES


  • Reverted Favor cost increase for Despair Glyph Prophecies


Please note that for approximately three hours after the patch goes live, it is still possible to connect to an old server that will charge the old costs. Please wait until after this window before visiting the Observatory specifically to purchase Despair Glyph Prophecies.

1.0 Launch Retrospective

Hello Travelers!

Today’s blog is a little different from our usual fare. As most of you know, Last Epoch launched on February 21, and the reception has been amazing. In the first week after launch, over 1.4 million of you logged in to play Last Epoch. At our peak, we had just under 265,000 players all roaming Eterra simultaneously. That’s good enough for the 39th highest all-time concurrent player count recorded on Steam, and we’re humbled by your support and enthusiasm for the game.

There’s plenty of cause for celebration, but let’s not ignore the obvious: Last Epoch’s launch was pretty rough for the majority of you who play online. Your patience and positivity have been amazing, but it was obviously not the launch experience that you or we had hoped for.

Now that the initial launch woes are behind us, it’s time to reflect on the experience. What happened? We put a strong emphasis on testing our servers and infrastructure ahead of time, so what did we get wrong? Last Epoch’s backend team is here to give you a bit of a recap of what went on during the launch.

[h2]How Our Game Works[/h2]

Let's begin with a quick explanation of how our game works when played online. When you boot up Last Epoch and enter the game, what you see as a player is relatively straightforward: first you log in, then you select your character, and then you join a game server. Behind the scenes, though, what you've just done is communicate with and move through half a dozen online services. These services log you in, provide you with game data, and give you a server to connect to so you can play the game. You connect to this game server, but then the server itself goes out and talks to even more services to authenticate you, load your character data, and check things like your party membership.

These supporting services are the “backend” of our game, and without them our game doesn’t work. Some services are more important than others, but as a general rule, most services are required for the game to function. The good news is these services are all pretty resilient. Behind the scenes, each service is not one program but many copies of the same program. If one of the copies breaks down, the other copies keep working. Crucially, if a service is overloaded, we can fix that by just deploying more copies of the program. If our services are designed properly, we can handle any number of players; all we need to do is throw more copies at the problem.

(This design applies to our game servers, too; Last Epoch has never once “run out” of game servers for players because they’re all interchangeable, and we can create as many new ones as we need. The closest we ever come to running out of servers is when we get a spike in players and need to wait a short amount of time for extra ones to boot up).
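
To make the "throw more copies at the problem" idea concrete, here is a minimal sketch in Python - with made-up numbers rather than our real capacity figures or provisioning code - of how total capacity scales linearly with the number of copies of a stateless service:

[code]
# Hypothetical illustration of horizontal scaling: if each copy of a
# stateless service can handle a fixed request rate, meeting a higher
# load is just a matter of running more copies.
import math

REQUESTS_PER_COPY = 500  # assumed requests/second a single copy can handle

def copies_needed(expected_load: int) -> int:
    """How many copies are needed to serve `expected_load` requests/second."""
    return math.ceil(expected_load / REQUESTS_PER_COPY)

for load in (10_000, 50_000, 150_000, 250_000):
    print(f"{load:>7} req/s -> {copies_needed(load):>4} copies")
[/code]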

[h2]Preparing For Launch[/h2]

The “designing our services properly” bit is the hard part. For some of our services, it’s very challenging to design them in a way that actually lets us scale just by throwing more copies at the problem. If you were around for the launch of patch 0.9.0, our first multiplayer release, you saw our game go down as soon as we crossed 40,000 players. Why? The service that matches players to servers had a design flaw where it started slowing down when there were many servers available. Once about 40,000 players joined servers, the performance of the matcher got so bad that it didn’t matter how many copies we threw at the problem - every copy would crash under the strain.
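
As a rough, hypothetical illustration of why adding copies couldn't save us: the sketch below assumes the matcher scanned every live server on each request, which is a simplification rather than a description of the real code. When each request's cost grows with the size of the server fleet, every copy slows down together, so throughput stops scaling with player count.

[code]
# Hypothetical sketch: a matcher whose per-request work grows with the
# number of live game servers. Adding copies doesn't rescue it, because
# every copy still pays the full cost of scanning the whole fleet.
SCAN_COST_PER_SERVER = 0.0001  # assumed seconds spent per server scanned

def matches_per_second(copies: int, live_servers: int) -> float:
    """Total matcher throughput if each request scans every live server."""
    time_per_request = live_servers * SCAN_COST_PER_SERVER
    return copies / time_per_request

# More players means more live servers, so per-copy throughput falls
# exactly when demand rises.
for servers in (1_000, 5_000, 20_000):
    rate = matches_per_second(copies=50, live_servers=servers)
    print(f"{servers:>6} servers -> {rate:>7.0f} matches/s with 50 copies")
[/code]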

The backend team spent much of the time between 0.9.0 and 1.0.0 hunting down and fixing these sorts of design flaws. Some flaws could be fixed easily; others required entirely new services to handle our players' data in a way that could actually scale. We were given a blank check to do whatever was needed to support our launch, so the only obstacle was time.

By the week before launch, we seemed to be in a reasonably comfortable spot. Everything was built, and we’d performed multiple rounds of load testing on our entire backend to ensure it could handle the volume of requests we expected at launch. The results were promising, and we hadn’t pulled any punches on the testing either. We were as ready as we were ever going to be for launch.

[h2]The Morning Of Launch[/h2]

For something like a game launch, “readiness” is mostly about having a plan. You can have confidence that you’ve tested and prepared, you can have confidence that you know how your services work, but you can’t have confidence that nothing will go wrong. Instead, you plan for what to do if something unanticipated happens.

On the morning of launch day, we went to scale up our server matching service to the numbers we used in our tests, and to our great surprise, it refused to spin up more than half the copies we asked for. Server matching is a critical service, and in our testing, it needed a high number of copies to handle all the players we expected, so getting stuck at half capacity was a serious problem.

This wasn’t even the only pre-launch hiccup. In a case of unfortunate timing, our service host had an incident the night before - still ongoing at launch time - that affected us in a way that prevented us from deploying changes to any of our backend services using the tools we had relied on for months. Our ability to fix our services was killed at the same time one of our services needed fixing.

We had workarounds for these problems, but they were not quick fixes. We were going to need to break apart our deployment tools and move our services around manually, but this was not something we could sneak in before the doors opened to all our players. Minutes before launch, we estimated that we could handle maybe 120,000 - 150,000 players before things started to fail, and we crossed our fingers that we’d be able to resolve our issues before the player count crept too high.

Well, you know how that went.


[h2]The Next 5 Days[/h2]


What unfolded over the next five days was a blur of emergency fixes and risk management. As it so happened, our first two pre-launch problems were only the tip of the iceberg.

In software, you sometimes run into a problem called “cascading failure.” When different parts of a software system rely on each other, an error in one part can cause errors in all the other parts as well. This can make it look like the entire system is failing even though only one part actually is. Finding the root cause of the failure is very difficult when everything is failing all at once.

The server matcher problem had caused a cascading failure in our systems. When players fail to connect to a server, they usually just try again, meaning our struggling server matcher had to deal with 2-5x the number of requests it would normally need to deal with. In many cases, players would get through the first half of server matching but would fail the second half, meaning servers would bring themselves online and then shut down again because a player never connected. Servers booting up and shutting down put pressure on our other services, and so some of those also started to fail.
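
To put rough numbers on that retry amplification, here is a small hypothetical simulation - the failure rates and player counts are made up, not measurements from launch. Once a meaningful fraction of connection attempts fail and get retried, the offered load on the already-struggling matcher multiplies:

[code]
# Hypothetical sketch of retry amplification: when connection attempts
# fail, players retry, so a struggling service sees a multiple of its
# normal request volume.
def offered_load(base_requests: int, failure_rate: float, max_retries: int) -> float:
    """Total requests sent when every failed attempt is retried."""
    total = 0.0
    attempts = float(base_requests)
    for _ in range(max_retries + 1):  # the first attempt plus retries
        total += attempts
        attempts *= failure_rate       # only failed attempts are retried
    return total

base = 100_000  # assumed number of players trying to connect
for failure_rate in (0.0, 0.5, 0.8):
    load = offered_load(base, failure_rate, max_retries=5)
    print(f"{failure_rate:.0%} failures -> {load / base:.1f}x normal volume")
[/code]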

When we fixed the server matcher, some other services continued to fail because they had trouble recovering from the chaos. Our deployment tools still required attention, so fixing these other services was a slow and manual process. To clear up the backlog, we needed to scale some of our services way past what would have been needed had the game been working smoothly. This brought with it new challenges since cloud services have some built-in soft caps that we would never have hit under normal circumstances, and working around those caps took time and, in some cases, code changes. We identified and cleared away many of these caps before launch, but we hit new ones as we scrambled to rearrange our backend.

You may be wondering, if the problem was recovering from too many players, why did we not simply have some downtime, or at least turn on player queues, to alleviate the pressure? The answer is that we did, but the problems ran deeper than the server matcher. At various points during the launch, we brought down our services, and many of you found yourselves in long queues as we struggled to keep up. Inevitably, once we started letting you all back in, we would run into problems again, and we could not clearly see what those problems were until we scaled everything up so high that the services stayed online and operational even though they were strained from other failures in other areas.

Sprinkled in with our deployment woes, we had a couple of genuine code problems in our services. One of them - one of the few examples where we straight up overestimated our ability to scale - was a bottleneck in how quickly we could process requests for a single town in a single region of the world. In Last Epoch, once you reach the “End of Time” town, you will always load into that town when you enter the game from character select. We knew ahead of time that this bottleneck existed, but we underestimated what would happen when we suddenly fixed a broken game and hundreds of thousands of players all tried to access the same town at the same time, in the same part of the world. We thought that the server matcher would be a little slow at first, and then it would start to fix itself as more and more people got into the game. What happened instead was that it took so long to get into the game that almost no one actually succeeded, so they quit and tried again, over and over, and the problem never tapered off. This was not the kind of problem we could fix just by adding more copies of the service, so it took some emergency problem solving.

Each time we found a problem and fixed it, we immediately saw improvement, but this allowed even more players to enter the game and play, which would uncover the next problem, and so on.

[h2]The Final Fix[/h2]

By Sunday, we had managed to fix, deploy, and scale our services to the point that most of our backend was handling over 200,000 players just fine, even through flurries of retries and errors. Yet amidst all the chaos, there was still some strange behavior happening on the game servers that was causing problems. During our periods of stability, when the game was up, players were able to connect to game servers, but their connections would often time out once they got there.

Every time a player joins a game server, the game server checks to see if the player is in a party. This is a simple operation, and in all of our testing, we saw that this check completed very quickly, even under heavy load. Yet our logs were telling us that checks for a player’s party were taking up to a minute, sometimes even longer.

Over the first four days, we made a number of changes to the party service to alleviate pressure. Each fix helped for a while, but inevitably, it always slowed down again until players could no longer join servers. On day five, with all the other backend problems solved, we were able to get a more precise look at the party errors, and the culprit was a single, innocent-looking line of code. A single line of code that was supposed to be the most efficient request in our entire party service but instead ended up consuming all of the service’s resources under heavy load, slowing the entire service to a crawl.

It took about an hour on Sunday afternoon to rearrange the party data so we could check over 200,000 players’ party memberships without bringing down the service. We deployed the fix, the game came back up, and it’s been online ever since.
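
We won't reproduce the actual line here, but as a hypothetical illustration of the general failure mode - a lookup that looks cheap in testing yet scales with the amount of data it touches - compare a party check that scans every party on each server join with one that reads from an index keyed by player, which is the spirit of the data rearrangement described above:

[code]
# Hypothetical illustration (not our actual party service code): two ways
# to answer "which party is this player in?" The scan is fine in testing,
# but its cost grows with the number of parties and is paid on every
# server join; the indexed lookup stays constant-time under load.
from typing import Dict, List, Optional

# parties: party_id -> member account names
parties: Dict[int, List[str]] = {
    1: ["alice", "bob"],
    2: ["carol"],
}

# Index maintained alongside the party data: player -> party_id
party_of: Dict[str, int] = {
    member: pid for pid, members in parties.items() for member in members
}

def find_party_by_scan(player: str) -> Optional[int]:
    """O(total members): deceptively cheap until hundreds of thousands join."""
    for pid, members in parties.items():
        if player in members:
            return pid
    return None

def find_party_by_index(player: str) -> Optional[int]:
    """O(1): illustrative of the kind of data rearrangement described above."""
    return party_of.get(player)

assert find_party_by_scan("alice") == find_party_by_index("alice") == 1
assert find_party_by_scan("dave") is None and find_party_by_index("dave") is None
[/code]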

[h2]Lessons Learned[/h2]

This blog post is 2,000 words long, and there is still a whole lot more we could say. Internally, we have been cataloging and planning for ways we can improve, and we want to ensure that our processes moving forward include the lessons we have learned from the launch.

First, we learned the hard way that our internal tooling for deploying our services was not robust enough on launch day. Our tools were too brittle (breaking when certain services went down) and too inflexible (too many manual adjustments needed in an emergency). When the system came under strain, we couldn’t deploy our fixes quickly, and we usually had to cause additional downtime to do it. Had our deploy tooling been stronger, we could have gotten to a stable state much more quickly. Our top priority on the team right now is improving our tooling so we can effectively respond to situations like these.

Second, our services themselves could be more flexible. We had to make many changes over the course of the launch that should have been simple configuration changes but instead required a full redeploy, which turned simple fixes into long, risky operations. This weakness was identified ahead of time and has now become a top priority to improve.

Third, we need to do a better job of anticipating how player behavior affects our backend. Our testing was designed to simulate how and when our services would break, but we needed to spend more time considering how the conditions would continue to change once things started failing. Now that we’ve seen what happens during a fraught launch - how players put pressure on services differently than when everything is working - we’ll be able to incorporate that data into future tests.

(As an aside, our testing effort, in general, was a huge success. Despite how it may look from our launch struggles, our testing identified many other critical issues leading all the way up to the week before launch, and without those efforts, we might still be trying to fix the game to this day. Even though we’re post-launch, we plan to continue incorporating load testing into our regular development cadence going forward.)

[h2]Thank You[/h2]

With the launch behind us, we’re all very thankful to you players for showing so much passion for the game, despite the rocky start.

EHG started from a group of gamers hoping to make the ARPG they wanted to play, and now EHG is a group of gamers hoping to make the game YOU want to play. Your passion, enthusiasm and, when deserved, criticism have continued to encourage our teams to deliver that game and push the definition of what an ARPG can be, should be, and will be. Our team could not have made Last Epoch what it is today without you, and we will endeavor to keep making the game you all deserve.

Here's to a bright future, Travelers.