Index Binary Options

Trading Discussion • Make 2usd bitcoin Guaranteed Every Minute Trading Binary Options Volatility 50 index

submitted by btcforumbot to BtcForum [link] [comments]

Anyone trading bitcoin binary options on IG Index ?

The spread seems a bit crazy to me at the moment - 10 points between buy and sell ???
submitted by ragmondo to Bitcoin [link] [comments]

#Tradingtip 12 (02.15) Trade with Nasdaq indexes. CFD-s and binary options can still be traded today!

submitted by jonbowe to DailySmarty [link] [comments]

Wall Street Week Ahead for the trading week beginning August 17th, 2020

Good Saturday morning to all of you here on r/stocks. I hope everyone on this sub made out pretty nicely in the market this past week, and is ready for the new trading week ahead.
Here is everything you need to know to get you ready for the trading week beginning August 17th, 2020.

Stocks are ignoring the lack of a stimulus package from Congress, but that could change - (Source)

Stocks could hang at record levels but gains may be capped until Congress agrees to a new stimulus package to help the economy and the millions of unemployed Americans.
Stocks were higher in the past week, and the S&P 500 flirted with record levels it set in February.
In the coming week, there are some major retailers reporting earnings, including Walmart, Home Depot and Target, but the season is mostly over and the market is entering a quiet period. There are minutes from the Fed’s last meeting, released Wednesday, and housing data, including starts Tuesday and existing sales Friday.
Investors had been watching efforts by Congress to agree to a new stimulus package, but talks have failed and the Senate has gone on recess. There is a concern that Congress will not be convinced to provide a big enough package when it does get to work again on the next stimulus round because recent economic reports look stronger. July’s retail sales, for example, climbed to a record level and recovered to pre-pandemic levels.
“The juxtaposition of getting more fiscal stimulus and better data has paralyzed us in our tracks … we’ve seen this sideways [market] action,” said Art Hogan, chief market strategist at National Alliance. “It feels like we need more action from Congress, and the concern is the longer we wait, the better the data gets and the less impactful the next round of stimulus will be.”
Some technical analysts say the market may pull back around the high, to allow it to consolidate gains before moving higher into the end of the year. The S&P 500 reached an all-time high of 3,393 on Feb. 19.
Hogan said he expects stocks to tread sideways during the dog days of August, but they could begin to react negatively to the election in September. He also said it is important that progress continue against the spread of Covid-19, as the economy continues to reopen.
Peter Boockvar, chief investment strategist at Bleakley Advisory Group, said the market could have a wakeup call at some point that the stimulus package has not been approved.
“I think it will cross over a line where they care,” he said. “I think the market is in suspended animation of believing there will be a magical deal.” Boockvar said he expects a deal ultimately, but the impact is not likely to be as big as the last round of funding.
“What they’re not grasping is any deal, any extension of unemployment benefits, is going to be smaller than it was, and the rate of change should be the most important thing investors focus on,” he said. “Not the binary outcome of whether there’s a deal or no deal. There’s going to be less air going into the balloon.”

It’s the economy

Still, economists expect to see a strong rebound in the third quarter, and are anticipating about a 20% jump in third-quarter growth. But they also say that could be threatened if Congress does not help with another stimulus package.
Mark Zandi, chief economist at Moody’s Analytics, described the July retail sales as a perfect V-shaped recovery, but cautioned it would not last unless more aid gets to individuals and cities and states. Democrats have sought a $3 trillion spending package, and Republicans in the Senate offered a $1 trillion package. They could not reach a compromise, including on a $600 weekly payment to individuals on unemployment which expired July 31.
President Donald Trump has tried to fill the gap with executive orders to provide extra benefits to those on unemployment, but the $300 federal payment and $100 from states may take some time to reach individuals, as the processing varies by state. He has also issued an order instructing the Treasury to temporarily defer collection of payroll taxes from individuals making up to $104,000.
“I think in August and September, there will be a lot of Ws, if there’s not more help here,” said Zandi, referring to an economic recovery that retrenches from a V shape before heading higher again. “It’s clearly perplexing. It may take the stock market to say we’re not going to get what we expect, and sell off and light a fire.”
Zandi said it could come to a situation like 2008, where the stock market sold off sharply before Congress would agree to a program that helped financial companies.
“We need a TARP moment to get these guys to help. Maybe if the claims tick higher and the August employment numbers are soft, given the president is focused on the stock market, that might be what it takes to get them back to the table in earnest,” he said, referring to the Troubled Asset Relief Program that helped rescue banks during the financial crisis.
He ultimately expects a package of about $1.5 trillion to be approved in September.
The lack of funding for state and local governments could result in more layoffs, as they struggle with their current 2021 budgets, Zandi said. Already 1.3 million public sector jobs have been lost since February, and there will be more layoffs and more programs and projects cancelled. The impact will hit contractors and other businesses that provide services to local governments.
“The multipliers on state and local government are among the highest of any form of support, so if you don’t provide it, it’s going to ripple through the economy pretty fast,” he said.
Economists expect to see a softening in consumer spending in August with the more than 28 million Americans on unemployment benefits as of mid-July no longer receiving any supplemental pay.
“The real irony is things are shaping up that September is going to be a bad month, and that’s going to show up in all the data in October,” Zandi said. “They are really taking a chance on this election by not acting.”

This past week saw the following moves in the S&P:

(CLICK HERE FOR THE FULL S&P TREE MAP FOR THE PAST WEEK!)

Major Indices for this past week:

(CLICK HERE FOR THE MAJOR INDICES FOR THE PAST WEEK!)

Major Futures Markets as of Friday's close:

(CLICK HERE FOR THE MAJOR FUTURES INDICES AS OF FRIDAY!)

Economic Calendar for the Week Ahead:

(CLICK HERE FOR THE FULL ECONOMIC CALENDAR FOR THE WEEK AHEAD!)

Percentage Changes for the Major Indices, WTD, MTD, QTD, YTD as of Friday's close:

(CLICK HERE FOR THE CHART!)

S&P Sectors for the Past Week:

(CLICK HERE FOR THE CHART!)

Major Indices Pullback/Correction Levels as of Friday's close:

(CLICK HERE FOR THE CHART!)

Major Indices Rally Levels as of Friday's close:

(CLICK HERE FOR THE CHART!)

Most Anticipated Earnings Releases for this week:

(CLICK HERE FOR THE CHART!)

Here are the upcoming IPO's for this week:

(CLICK HERE FOR THE CHART!)

Friday's Stock Analyst Upgrades & Downgrades:

(CLICK HERE FOR THE CHART LINK #1!)
(CLICK HERE FOR THE CHART LINK #2!)

4 Charts That Will Amaze You

The S&P 500 Index is a few points away from a new all-time high, completing one of the fastest recoveries from a bear market ever. But this will also seal the deal on the shortest bear market ever. Remember, the S&P 500 Index lost 20% from an all-time high in only 16 trading days back in February and March, so it makes sense that this recovery could be one of the fastest ever.
From the lows on March 23, the S&P 500 has now added more than 50%. Many have been calling this a bear market rally for months, while we have been in the camp this is something more. It’s easy to see why this rally is different based on where it stands versus other bear market rallies:
(CLICK HERE FOR THE CHART!)
They say the stock market is the only place where things go on sale, yet everyone runs out of the store screaming. We absolutely saw that back in March and now with stocks near new highs, many have missed this record run. Here we show how stocks have been usually higher a year or two after corrections.
(CLICK HERE FOR THE CHART!)
After a historic drop in March, the S&P 500 has closed higher in April, May, June, and July. This rare event has happened only 11 other times, with stocks gaining in the final five months of the year a very impressive 10 times. Only 2018, with its nearly 20% collapse in December, saw a loss in those final five months.
(CLICK HERE FOR THE CHART!)
As shown in the LPL Chart of the Day, this bear market will go down as the fastest ever, at just over one month. The recovery back to new highs will be five months if we get there by August 23, making this one of the fastest recoveries ever. Not surprisingly, it usually takes longer for bear markets in a recession to recover, which only adds to the impressiveness of this rally.
(CLICK HERE FOR THE CHART!)
“It normally takes 30 months for bear markets during a recession to recover their losses, which makes this recovery all the more amazing,” said LPL Financial Chief Market Strategist Ryan Detrick. “Then again, there has been nothing normal about this recession, so maybe we shouldn’t be shocked about yet another record going down in 2020.”

When a Few Basis Points Packs a Punch

US Treasury yields have been on the rise this week with the 10-year yield rising 13 basis points (bps) from 0.56% up to 0.69% after getting as high as 0.72% on Thursday. A 13 bps move higher in interest rates may not seem like a whole lot, but with rates already at such low levels, a small move can have a pretty big impact on the prices of longer-term maturities.
(CLICK HERE FOR THE CHART!)
Starting with longer-term US Treasuries, TLT, which measures the performance of maturities greater than 20 years, has declined 3.5% this week. Now, for a growth stock, 3.5% is par for the course, but that kind of move in the Treasury market is no small thing. The latest pullback for TLT also coincides with another failed attempt by the ETF to trade and stay above $170 for more than a day.
(CLICK HERE FOR THE CHART!)
The further out the maturity window you go in the fixed income market, the bigger the impact of the move higher in interest rates. The Republic of Austria issued a 100-year bond in 2017, and its movements exemplify the wild moves that small changes in interest rates (from a low base) can have on prices. Just this week, the Austrian 100-year was down over 5%, which is a painful move no matter what type of asset class you are talking about. This week's move, though, was nothing compared to the stomach-churning swings from earlier this year. When Covid was first hitting the fan, the 100-year rallied 57% in the span of less than two months. That kind of move usually occurs over years rather than days, but in less than a third of that time, all those gains disintegrated in a two-and-a-half week span from early to late March. Easy come, easy go. Ironically enough, despite all the big up and down moves in this bond over the last year, as we type this, the bond's price is the same now as it was on this same day last year.
(CLICK HERE FOR THE CHART!)

Retail Sales Rock to New Highs

At the headline level, July’s Retail Sales report disappointed as the reading missed expectations by nearly a full percentage point. Just as soon as the report was released, we saw a number of stories pounce on the disappointment as a sign that the economy was losing steam. Looked at in more detail, though, the July report wasn’t all that bad. While the headline reading rose less than expected (1.2% vs 2.1%), the Ex Autos and Ex Autos and Gas readings were much better than expected. Not only that, but June’s original readings were all revised higher by around a full percentage point.
Besides the fact that this month’s report was better underneath the surface and June’s reading was revised higher, it was also notable as the seasonally-adjusted annualized rate of sales in July hit a new record high. After the last record high back in January, only five months passed until American consumers were back to their pre-Covid spending ways. For the sake of comparison, back during the Financial Crisis, 40 months passed between the original high in Retail Sales in November 2007 and the next record high in April 2011. 5 months versus 40? Never underestimate the power of the US consumer!
(CLICK HERE FOR THE CHART!)
While the monthly pace of retail sales is back at all-time highs, the characteristics behind the total level of sales have changed markedly in the post COVID world. In our just released B.I.G. Tips report we looked at these changing dynamics to highlight the groups that have been the biggest winners and losers from the shifts.

100 Days of Gains

Today marked 100 trading days since the Nasdaq 100's March 20th COVID Crash closing low. Below is a chart showing the rolling 100-trading day percentage change of the Nasdaq 100 since 1985. The 59.8% gain over the last 100 trading days ranks as the 3rd strongest run on record. The only two stronger 100-day rallies ended in January 1999 and March 2000.
(CLICK HERE FOR THE CHART!)
While the Nasdaq 100 bottomed on Friday, March 20th, the S&P 500 bottomed the following Monday (3/23). This means tomorrow will mark 100 trading days since the S&P 500's COVID Crash closing low. Right now the rolling 100-day percentage change for the S&P 500 sits at +46.7%. But if the S&P manages to trade at current levels tomorrow, the 100-day gain will jump above 50%. It has been 87 years (1933) since we've seen a 100-day gain of more than 50%!
(CLICK HERE FOR THE CHART!)

B.I.G. Tips - New Highs In Sight

Whether you want to look at it from the perspective of closing prices or intraday levels, the S&P 500 is doing what just about everybody thought would be impossible less than five months ago - approaching record highs. Relative to its closing high of 3,386.15, the S&P 500 is just 0.27% lower, while it's within half a percent of its record intraday high of 3,393.52. Through today, the S&P 500 has gone 120 trading days without a record high, and as shown in the chart below, the current streak is barely even visible when viewed in the perspective of all streaks since 1928. Even if we zoom in on just the last five years, the current streak of 120 trading days only ranks as the fourth-longest streak without a new high.
While the S&P 500's 120-trading day streak without a new high isn't extreme by historical standards, the turnaround off the lows has been extraordinary. In the S&P 500's history, there have been ten prior declines of at least 20% from a record closing high. Of those ten prior periods, the shortest gap between the original record high and the next one was 309 trading days, and the shortest gap between highs that had a pullback of at least 30% was 484 trading days (or more than four times the current gap of 120 trading days). For all ten streaks without a record high, the median drought was 680 trading days.
(CLICK HERE FOR THE CHART!)
Whenever the S&P 500 does take out its 2/19 high, the question is whether the new high represents a breakout where the S&P 500 keeps rallying into evergreen territory, or whether it runs out of gas after finally reaching a new milestone. To shed some light on this question, we looked at the S&P 500's performance following each prior streak of similar duration without a new high.

STOCK MARKET VIDEO: Stock Market Analysis Video for Week Ending August 14th, 2020

(CLICK HERE FOR THE YOUTUBE VIDEO!)
(VIDEO NOT YET POSTED!)

STOCK MARKET VIDEO: ShadowTrader Video Weekly 8.16.20

(CLICK HERE FOR THE YOUTUBE VIDEO!)
(VIDEO NOT YET POSTED!)
Here are the most notable companies (tickers) reporting earnings in this upcoming trading week ahead-
  • NOTABLE TICKERS REMOVED DUE TO STOCKS AUTO MOD
(CLICK HERE FOR NEXT WEEK'S MOST NOTABLE EARNINGS RELEASES!)
(CLICK HERE FOR NEXT WEEK'S HIGHEST VOLATILITY EARNINGS RELEASES!)
Below are some of the notable companies coming out with earnings releases this upcoming trading week ahead which includes the date/time of release & consensus estimates courtesy of Earnings Whispers:

Monday 8.17.20 Before Market Open:

(CLICK HERE FOR MONDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES!)

Monday 8.17.20 After Market Close:

(CLICK HERE FOR MONDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES!)

Tuesday 8.18.20 Before Market Open:

(CLICK HERE FOR TUESDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES!)

Tuesday 8.18.20 After Market Close:

(CLICK HERE FOR TUESDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES!)

Wednesday 8.19.20 Before Market Open:

(CLICK HERE FOR WEDNESDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES!)

Wednesday 8.19.20 After Market Close:

(CLICK HERE FOR WEDNESDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES!)

Thursday 8.20.20 Before Market Open:

(CLICK HERE FOR THURSDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES!)

Thursday 8.20.20 After Market Close:

(CLICK HERE FOR THURSDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES!)

Friday 8.21.20 Before Market Open:

(CLICK HERE FOR FRIDAY'S PRE-MARKET EARNINGS TIME & ESTIMATES!)

Friday 8.21.20 After Market Close:

(CLICK HERE FOR FRIDAY'S AFTER-MARKET EARNINGS TIME & ESTIMATES!)
(NONE)

Walmart Inc. $132.60

Walmart Inc. (WMT) is confirmed to report earnings at approximately 7:00 AM ET on Tuesday, August 18, 2020. The consensus earnings estimate is $1.20 per share on revenue of $134.28 billion and the Earnings Whisper ® number is $1.29 per share. Investor sentiment going into the company's earnings release has 81% expecting an earnings beat. Consensus estimates are for earnings to decline year-over-year by 5.51% with revenue increasing by 2.99%. Short interest has decreased by 12.5% since the company's last earnings release while the stock has drifted higher by 0.6% from its open following the earnings release to be 9.9% above its 200 day moving average of $120.64. Overall earnings estimates have been revised higher since the company's last earnings release. On Tuesday, August 11, 2020 there was some notable buying of 12,381 contracts of the $135.00 put expiring on Friday, August 21, 2020. Option traders are pricing in a 4.9% move on earnings and the stock has averaged a 2.3% move in recent quarters.

(CLICK HERE FOR THE CHART!)

NVIDIA Corp. $462.56

NVIDIA Corp. (NVDA) is confirmed to report earnings at approximately 4:20 PM ET on Wednesday, August 19, 2020. The consensus earnings estimate is $1.95 per share on revenue of $3.65 billion and the Earnings Whisper ® number is $2.01 per share. Investor sentiment going into the company's earnings release has 84% expecting an earnings beat. The company's guidance was for earnings of $1.83 to $2.06 per share. Consensus estimates are for year-over-year earnings growth of 65.25% with revenue increasing by 41.53%. The stock has drifted higher by 31.0% from its open following the earnings release to be 57.7% above its 200 day moving average of $293.24. Overall earnings estimates have been revised higher since the company's last earnings release. On Friday, August 14, 2020 there was some notable buying of 3,787 contracts of the $460.00 call expiring on Friday, August 21, 2020. Option traders are pricing in a 7.7% move on earnings and the stock has averaged a 4.0% move in recent quarters.

(CLICK HERE FOR THE CHART!)

Alibaba Group Holding Ltd. $253.97

Alibaba Group Holding Ltd. (BABA) is confirmed to report earnings at approximately 7:10 AM ET on Thursday, August 20, 2020. The consensus earnings estimate is $1.99 per share on revenue of $21.13 billion and the Earnings Whisper ® number is $2.11 per share. Investor sentiment going into the company's earnings release has 83% expecting an earnings beat. Consensus estimates are for year-over-year earnings growth of 8.74% with revenue increasing by 26.22%. Short interest has increased by 30.1% since the company's last earnings release while the stock has drifted higher by 25.0% from its open following the earnings release to be 20.0% above its 200 day moving average of $211.59. Overall earnings estimates have been revised lower since the company's last earnings release. On Friday, August 7, 2020 there was some notable buying of 12,935 contracts of the $300.00 call expiring on Friday, November 20, 2020. Option traders are pricing in a 6.2% move on earnings and the stock has averaged a 3.1% move in recent quarters.

(CLICK HERE FOR THE CHART!)

JD.com, Inc. $62.06

JD.com, Inc. (JD) is confirmed to report earnings at approximately 5:50 AM ET on Monday, August 17, 2020. The consensus earnings estimate is $0.38 per share on revenue of $26.98 billion and the Earnings Whisper ® number is $0.46 per share. Investor sentiment going into the company's earnings release has 78% expecting an earnings beat. Consensus estimates are for year-over-year earnings growth of 52.00% with revenue increasing by 23.25%. Short interest has increased by 16.7% since the company's last earnings release while the stock has drifted higher by 24.1% from its open following the earnings release to be 36.9% above its 200 day moving average of $45.34. Overall earnings estimates have been revised higher since the company's last earnings release. On Friday, August 14, 2020 there was some notable buying of 12,799 contracts of the $62.00 call expiring on Friday, August 21, 2020. Option traders are pricing in an 8.0% move on earnings and the stock has averaged a 6.4% move in recent quarters.

(CLICK HERE FOR THE CHART!)

Home Depot, Inc. $280.55

Home Depot, Inc. (HD) is confirmed to report earnings at approximately 6:00 AM ET on Tuesday, August 18, 2020. The consensus earnings estimate is $3.71 per share on revenue of $31.67 billion and the Earnings Whisper ® number is $3.75 per share. Investor sentiment going into the company's earnings release has 78% expecting an earnings beat. Consensus estimates are for year-over-year earnings growth of 17.03% with revenue increasing by 2.69%. Short interest has decreased by 39.8% since the company's last earnings release while the stock has drifted higher by 16.7% from its open following the earnings release to be 22.4% above its 200 day moving average of $229.20. Overall earnings estimates have been revised higher since the company's last earnings release. On Friday, August 14, 2020 there was some notable buying of 3,323 contracts of the $300.00 call expiring on Friday, August 28, 2020. Option traders are pricing in a 4.2% move on earnings and the stock has averaged a 2.5% move in recent quarters.

(CLICK HERE FOR THE CHART!)

Lowe's Companies, Inc. $154.34

Lowe's Companies, Inc. (LOW) is confirmed to report earnings at approximately 6:00 AM ET on Wednesday, August 19, 2020. The consensus earnings estimate is $2.93 per share on revenue of $21.29 billion and the Earnings Whisper ® number is $2.97 per share. Investor sentiment going into the company's earnings release has 78% expecting an earnings beat. Consensus estimates are for year-over-year earnings growth of 36.28% with revenue increasing by 1.42%. Short interest has decreased by 19.2% since the company's last earnings release while the stock has drifted higher by 25.9% from its open following the earnings release to be 31.2% above its 200 day moving average of $117.67. Overall earnings estimates have been revised higher since the company's last earnings release. On Friday, August 7, 2020 there was some notable buying of 1,994 contracts of the $170.00 call expiring on Friday, August 21, 2020. Option traders are pricing in a 6.0% move on earnings and the stock has averaged a 5.8% move in recent quarters.

(CLICK HERE FOR THE CHART!)

Target Corp. $136.53

Target Corp. (TGT) is confirmed to report earnings at approximately 6:30 AM ET on Wednesday, August 19, 2020. The consensus earnings estimate is $1.56 per share on revenue of $19.30 billion and the Earnings Whisper ® number is $1.64 per share. Investor sentiment going into the company's earnings release has 75% expecting an earnings beat. Consensus estimates are for earnings to decline year-over-year by 14.29% with revenue increasing by 4.77%. Short interest has decreased by 36.8% since the company's last earnings release while the stock has drifted higher by 10.0% from its open following the earnings release to be 18.0% above its 200 day moving average of $115.73. Overall earnings estimates have been revised higher since the company's last earnings release. On Monday, August 10, 2020 there was some notable buying of 4,479 contracts of the $135.00 call expiring on Friday, September 18, 2020. Option traders are pricing in a 6.3% move on earnings and the stock has averaged a 7.7% move in recent quarters.

(CLICK HERE FOR THE CHART!)

Sea Limited $126.50

Sea Limited (SE) is confirmed to report earnings at approximately 6:30 AM ET on Tuesday, August 18, 2020. The consensus estimate is for a loss of $0.47 per share on revenue of $1.03 billion and the Earnings Whisper ® number is ($0.36) per share. Investor sentiment going into the company's earnings release has 74% expecting an earnings beat. Consensus estimates are for earnings to decline year-over-year by 34.29% with revenue increasing by 136.16%. Short interest has decreased by 8.5% since the company's last earnings release while the stock has drifted higher by 91.7% from its open following the earnings release to be 98.1% above its 200 day moving average of $63.87. Overall earnings estimates have been revised lower since the company's last earnings release. On Tuesday, August 4, 2020 there was some notable buying of 4,000 contracts of the $110.00 put expiring on Friday, January 15, 2021. Option traders are pricing in a 12.9% move on earnings and the stock has averaged a 16.7% move in recent quarters.

(CLICK HERE FOR THE CHART!)

Niu Technologies $20.82

Niu Technologies (NIU) is confirmed to report earnings at approximately 3:00 AM ET on Monday, August 17, 2020. The consensus earnings estimate is $0.07 per share on revenue of $88.07 million and the Earnings Whisper ® number is $0.11 per share. Investor sentiment going into the company's earnings release has 57% expecting an earnings beat. Consensus estimates are for earnings to decline year-over-year by 30.00% with revenue increasing by 13.97%. Short interest has increased by 18.9% since the company's last earnings release while the stock has drifted higher by 129.8% from its open following the earnings release to be 90.3% above its 200 day moving average of $10.94. Overall earnings estimates have been revised higher since the company's last earnings release. The stock has averaged a 3.7% move on earnings in recent quarters.

(CLICK HERE FOR THE CHART!)

BJ's Wholesale Club, Inc. $41.48

BJ's Wholesale Club, Inc. (BJ) is confirmed to report earnings at approximately 6:45 AM ET on Thursday, August 20, 2020. The consensus earnings estimate is $0.57 per share on revenue of $3.64 billion and the Earnings Whisper ® number is $0.60 per share. Investor sentiment going into the company's earnings release has 73% expecting an earnings beat. Consensus estimates are for year-over-year earnings growth of 46.15% with revenue increasing by 8.79%. Short interest has decreased by 3.2% since the company's last earnings release while the stock has drifted higher by 33.8% from its open following the earnings release to be 46.7% above its 200 day moving average of $28.27. Overall earnings estimates have been revised higher since the company's last earnings release. On Wednesday, August 12, 2020 there was some notable buying of 2,119 contracts of the $50.00 call expiring on Friday, September 18, 2020. Option traders are pricing in a 12.4% move on earnings and the stock has averaged a 10.0% move in recent quarters.

(CLICK HERE FOR THE CHART!)

DISCUSS!

What are you all watching for in this upcoming trading week?
I hope you all have a wonderful weekend and a great trading week ahead, r/stocks.
submitted by bigbear0083 to stocks [link] [comments]

Virtual Reality: Where it is and where it's going

VR is not what a lot of people think it is. It's not comparable to racing wheels, Kinect, or 3DTVs. It offers a shift that the game industry hasn't had before; a first of its kind. I'm going to outline what VR is like today, despite the many misconceptions around it, and what it will be like as it grows. What people find to be insurmountable problems are often solvable.
What is VR in 2020?
Something far more versatile and far-reaching than people comprehend. All game genres and camera perspectives work, so you're still able to access the types of games you've always enjoyed. It is often thought that VR is a 1st person medium and that's all it can do, but 3rd person and top-down VR games are a thing and in various cases are highly praised. Astro Bot, a 3rd person platformer, was the highest rated VR game before Half-Life: Alyx.
Let's crush some misconceptions of 2020 VR:
So what are the problems with VR in 2020?
Despite these downsides, VR still offers something truly special. What it enables is not just a more immersive way to game, but new ways to feel, to experience stories, to cooperate or fight against other players, and a plethora of new ways to interact which is the beating heart of gaming as a medium.
To give some examples, Boneworks is a game that has experimental full body physics and the amount of extra agency it provides is staggering. When you can actually manipulate physics on a level this intimately where you are able to directly control and manipulate things in a way that traditional gaming simply can't allow, it opens up a whole new avenue of gameplay and game design.
Things aren't based on a series of state machines anymore. "Is the player pressing the action button to climb this ladder or not?" "Is the player pressing the aim button to aim down the sights or not?"
These aren't binary choices in VR. Everything is freeform and you can basically be in any number of states at a given time. Instead of climbing a ladder with an animation lock, you can grab on with one hand while aiming with the other, or if it's physically modelled, you could find a way to pick it up and plant it on a pipe sticking out of the ground to make your own makeshift trap where you spin it around as it pivots on top of the pipe, knocking anything away that comes close. That's the power of physics in VR. You do things you think of in the same vein as reality, instead of thinking inside the set limitations of the designers. Even MGSV has its limitations with the freedom it provides, but that expands exponentially with 6DoF VR input and physics.
I talked about how VR could make you feel things. A character or person that gets close to you in VR is going to invade your literal personal space. Heights can start to feel like you are biologically in danger. The idea of tight spaces in, say, a horror game can cause claustrophobia. The way you move or interact with things can give off subtle, almost phantom-limb-like feelings because of the overwhelming visual and audio stimulation that enables you to do things you haven't experienced with your real body; an example being floating around in zero gravity in Lone Echo.
So it's not without its share of problems, but it's an incredibly versatile gaming technology in 2020. It's also worth noting just how important it is as a non-gaming device as well, because there simply isn't a more suitably combative device against a world-wide pandemic than VR. Simply put, it's one of the most important devices you can get right now for that reason alone, as you can socially connect with no distancing with face to face communication, travel and attend all sorts of events, and simply manage your mental and physical health in ways that the average person wishes so badly for right now.
Where VR is (probably) going to be in 5 years
You can expect a lot. A seismic shift that will make the VR of today feel like something very different. This is because the underlying technology is being reinvented with entirely custom tech that no longer relies on cell phone panels and lenses that have existed for decades.
That's enough to solve almost all the issues of the technology and make it a buy-in for the average gamer. In 5 years, we should really start to see the blending of reality and virtual reality and how close the two can feel.
Where VR is (probably) going to be in 10 years
In short, as good as if not better than the base technology of Ready Player One which consists of a visor and gloves. Interestingly, RPO missed out on the merging of VR and AR which will play an important part of the future of HMDs as they will become more versatile, easier to multi-task with, and more engrained into daily life where physical isolation is only a user choice. Useful treadmills and/or treadmill shoes as well as haptic suits will likely become (and stay) enthusiast items that are incredible in their own right but due to the commitment, aren't applicable to the average person - in a way, just like RPO.
At this stage, VR is mainstream with loads of AAA content coming out yearly and providing gaming experiences that are incomprehensible to most people today.
Overall, the future of VR couldn't be brighter. It's absolutely here to stay, it's more incredible than people realize today, and it's only going to get exponentially better and more convenient in ways that people can't imagine.
submitted by DarthBuzzard to truegaming [link] [comments]

NASPi: a Raspberry Pi Server

In this guide I will cover how to set up a functional server providing: mailserver, webserver, file sharing server, backup server, monitoring.
For this project a dynamic domain name is also needed. If you don't want to spend money registering a domain name, you can use services like dynu.com or duckdns.org. Between the two, I prefer dynu.com, because you can set every type of DNS record there (TXT records are only available after 30 days, but that's worth it compared to spending ~15€/year on a domain name), which the mailserver specifically needs.
Also, I highly suggest you read the documentation of the software used, since I cannot cover every feature.

Hardware


Software

(minor utilities not included)

Guide

First things first, we need to flash the OS to the SD card. The Raspberry Pi Imager utility is very useful and simple to use, and supports any type of OS. You can download it from the Raspberry Pi download page. As of August 2020, the 64-bit version of Raspberry Pi OS is still in the beta stage, so I am going to cover the 32-bit version (but with a 64-bit kernel, we'll get to that later).
Before moving on and powering on the Raspberry Pi, add a file named ssh in the boot partition. Doing so will enable the SSH interface (disabled by default). We can now insert the SD card into the Raspberry Pi.
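For example, if you flashed the card from another Linux machine, creating the file is a one-liner (the mount point of the boot partition is an assumption here; adjust it to wherever your system mounted the card):
$ touch /media/$USER/boot/ssh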
Once powered on, we need to attach it to the LAN via an Ethernet cable. Then, find the IP address of your Raspberry Pi within your LAN. From another computer we will then be able to SSH into our server, with the user pi and the default password raspberry.
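If your router doesn't list connected clients, a quick way to find the Pi is a ping scan with nmap from another computer. This sketch assumes your LAN uses 192.168.0.0/24, like the rest of this guide; adjust it to your subnet:
$ sudo apt install nmap
$ nmap -sn 192.168.0.0/24
$ ssh pi@192.168.0.x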

raspi-config

Using this utility, we will set a few things. First of all, set a new password for the pi user, using the first entry. Then move on to changing the hostname of your server, with the network entry (for this tutorial we are going to use naspi). Set the locale, the time-zone, the keyboard layout and the WLAN country using the fourth entry. At last, enable SSH by default with the fifth entry.

64-bit kernel

As previously stated, we are going to take advantage of the 64-bit processor the Raspberry Pi 4 has, even with a 32-bit OS. First, we need to update the firmware, then we will tweak some config.
$ sudo rpi-update
$ sudo nano /boot/config.txt
arm_64bit=1 
$ sudo reboot
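After the reboot you can verify that the 64-bit kernel is actually running:
$ uname -m
aarch64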

swap size

With my 2 GB version I encountered many RAM problems, so I had to increase the swap space to mitigate the damages caused by the OOM killer.
$ sudo dphys-swapfile swapoff
$ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=1024 
$ sudo dphys-swapfile setup
$ sudo dphys-swapfile swapon
Here we are increasing the swap size to 1 GB. According to your setup you can tweak this setting to add or remove swap. Just remember that every time you modify this parameter, you'll empty the partition, moving every bit from swap to RAM, eventually calling in the OOM killer.
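You can check the swap space that is actually active at any time:
$ swapon --show
$ free -h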

APT

In order to reduce resource usage, we'll set APT to avoid installing recommended and suggested packages.
$ sudo nano /etc/apt/apt.conf.d/01norecommend
APT::Install-Recommends "0";
APT::Install-Suggests "0";

Update

Before we start installing packages, we'll take a moment to update every component that is already installed.
$ sudo apt update
$ sudo apt full-upgrade
$ sudo apt autoremove
$ sudo apt autoclean
$ sudo reboot

Static IP address

For simplicity's sake we'll give our server a static IP address (within our LAN of course). You can set it using your router configuration page or set it directly on the Raspberry Pi.
$ sudo nano /etc/dhcpcd.conf
interface eth0
static ip_address=192.168.0.5/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1
$ sudo reboot
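After the reboot, verify that the address was actually applied:
$ ip addr show eth0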

Emailing

The first feature we'll set up is the mailserver. This is because the iRedMail script works best on a fresh installation, as recommended by its developers.
First we'll set the hostname to our domain name. Since my domain is naspi.webredirect.org, the domain name will be mail.naspi.webredirect.org.
$ sudo hostnamectl set-hostname mail.naspi.webredirect.org
$ sudo nano /etc/hosts
127.0.0.1 mail.naspi.webredirect.org localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.1.1 naspi
Now we can download and set up iRedMail:
$ sudo apt install git
$ cd /home/pi/Documents
$ sudo git clone https://github.com/iredmail/iRedMail.git
$ cd /home/pi/Documents/iRedMail
$ sudo chmod +x iRedMail.sh
$ sudo bash iRedMail.sh
Now the script will guide you through the installation process.
When asked for the mail directory location, set /var/vmail.
When asked for webserver, set Nginx.
When asked for DB engine, set MariaDB.
When asked for, set a secure and strong password.
When asked for the domain name, set yours, but without the mail. subdomain.
Again, set a secure and strong password.
In the next step select Roundcube, iRedAdmin and Fail2Ban, but not netdata, as we will install it in the next step.
When asked for, confirm your choices and let the installer do the rest.
$ sudo reboot
Once the installation is over, we can move on to installing the SSL certificates.
$ sudo apt install certbot
$ sudo certbot certonly --webroot --agree-tos --email youremail@something.com -d mail.naspi.webredirect.org -w /var/www/html/
$ sudo nano /etc/nginx/templates/ssl.tmpl
ssl_certificate /etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem;
$ sudo service nginx restart
$ sudo nano /etc/postfix/main.cf
smtpd_tls_key_file = /etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem
smtpd_tls_cert_file = /etc/letsencrypt/live/mail.naspi.webredirect.org/cert.pem
smtpd_tls_CAfile = /etc/letsencrypt/live/mail.naspi.webredirect.org/chain.pem
$ sudo service postfix restart
$ sudo nano /etc/dovecot/dovecot.conf
ssl_cert = </etc/letsencrypt/live/mail.naspi.webredirect.org/fullchain.pem
ssl_key = </etc/letsencrypt/live/mail.naspi.webredirect.org/privkey.pem
$ sudo service dovecot restart
Now we have to tweak some Nginx settings in order to not interfere with other services.
$ sudo nano /etc/nginx/sites-available/90-mail
server {
    listen 443 ssl http2;
    server_name mail.naspi.webredirect.org;
    root /var/www/html;
    index index.php index.html;
    include /etc/nginx/templates/misc.tmpl;
    include /etc/nginx/templates/ssl.tmpl;
    include /etc/nginx/templates/iredadmin.tmpl;
    include /etc/nginx/templates/roundcube.tmpl;
    include /etc/nginx/templates/sogo.tmpl;
    include /etc/nginx/templates/netdata.tmpl;
    include /etc/nginx/templates/php-catchall.tmpl;
    include /etc/nginx/templates/stub_status.tmpl;
}
server {
    listen 80;
    server_name mail.naspi.webredirect.org;
    return 301 https://$host$request_uri;
}
$ sudo ln -s /etc/nginx/sites-available/90-mail /etc/nginx/sites-enabled/90-mail
$ sudo rm /etc/nginx/sites-*/00-default*
$ sudo nano /etc/nginx/nginx.conf
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    server_names_hash_bucket_size 64;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/conf-enabled/*.conf;
    include /etc/nginx/sites-enabled/*;
}
$ sudo service nginx restart

.local domain

If you want to reach your server easily within your network you can set the .local domain to it. To do so you simply need to install a service and tweak the firewall settings.
$ sudo apt install avahi-daemon
$ sudo nano /etc/nftables.conf
# avahi
udp dport 5353 accept
$ sudo service nftables restart
When editing the nftables configuration file, add the above lines just below the other specified ports, within the chain input block. This is needed because avahi communicates via the 5353 UDP port.
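From another computer on the LAN you can then verify that the .local name resolves (assuming the published name is naspi; avahi-resolve comes with the avahi-utils package):
$ sudo apt install avahi-utils
$ avahi-resolve -n naspi.local
$ ping naspi.local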

RAID 1

At this point we can start setting up the disks. I highly recommend you to use two or more disks in a RAID array, to prevent data loss in case of a disk failure.
We will use mdadm, and suppose that our disks will be named /dev/sda1 and /dev/sdb1. To find out the names issue the sudo fdisk -l command.
$ sudo apt install mdadm
$ sudo mdadm --create -v /dev/md/RED -l 1 --raid-devices=2 /dev/sda1 /dev/sdb1
$ sudo mdadm --detail /dev/md/RED
$ sudo -i
$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
$ exit
$ sudo mkfs.ext4 -L RED -m .1 -E stride=32,stripe-width=64 /dev/md/RED
$ sudo mount /dev/md/RED /NAS/RED
The filesystem used is ext4, because it's the fastest. The RAID array is located at /dev/md/RED, and mounted to /NAS/RED.
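The initial sync of a new array can take several hours; you can follow its progress at any time with:
$ cat /proc/mdstat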

fstab

To automount the disks at boot, we will modify the fstab file. Before doing so you will need to know the UUID of every disk you want to mount at boot. You can find out these issuing the command ls -al /dev/disk/by-uuid.
$ sudo nano /etc/fstab
# Disk 1
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /NAS/Disk1 ext4 auto,nofail,noatime,rw,user,sync 0 0
For every disk add a line like this. To verify the functionality of fstab issue the command sudo mount -a.

S.M.A.R.T.

To monitor your disks, the S.M.A.R.T. utilities are a super powerful tool.
$ sudo apt install smartmontools
$ sudo nano /etc/default/smartmontools
start_smartd=yes 
$ sudo nano /etc/smartd.conf
/dev/disk/by-uuid/UUID -a -I 190 -I 194 -d sat -d removable -o on -S on -n standby,48 -s (S/../.././04|L/../../1/04) -m yourmail@something.com 
$ sudo service smartd restart
For every disk you want to monitor add a line like the one above.
About the flags:
· -a: full scan.
· -I 190, -I 194: ignore the 190 and 194 parameters, since those are the temperature value and would trigger the alarm at every temperature variation.
· -d sat, -d removable: removable SATA disks.
· -o on: offline testing, if available.
· -S on: attribute saving, between power cycles.
· -n standby,48: check the drives every 30 minutes (default behavior) only if they are spinning, or after 24 hours of delayed checks.
· -s (S/../.././04|L/../../1/04): short test every day at 4 AM, long test every Monday at 4 AM.
· -m yourmail@something.com: email address to which alerts are sent in case of problems.
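Besides the scheduled tests, you can also trigger a check manually and inspect the results with smartctl (replace /dev/sda with your own disk):
$ sudo smartctl -t short /dev/sda
$ sudo smartctl -a /dev/sda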

Automount USB devices

Two steps ago we set up the fstab file in order to mount the disks at boot. But what if you want to mount a USB disk immediately when plugged in? Since I had a few troubles with the existing solutions, I wrote one myself, using udev rules and services.
$ sudo apt install pmount
$ sudo nano /etc/udev/rules.d/11-automount.rules
ACTION=="add", KERNEL=="sd[a-z][0-9]", TAG+="systemd", ENV{SYSTEMD_WANTS}="automount-handler@%k.service" 
$ sudo chmod 0777 /etc/udev/rules.d/11-automount.rules
$ sudo nano /etc/systemd/system/automount-handler@.service
[Unit]
Description=Automount USB drives
BindsTo=dev-%i.device
After=dev-%i.device

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/automount %I
ExecStop=/usr/bin/pumount /dev/%I
$ sudo chmod 0777 /etc/systemd/system/automount-handler@.service
$ sudo nano /uslocal/bin/automount
#!/bin/bash
PART=$1
FS_UUID=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $3}'`
FS_LABEL=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $2}'`
DISK1_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
DISK2_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
if [ ${FS_UUID} == ${DISK1_UUID} ] || [ ${FS_UUID} == ${DISK2_UUID} ]; then
    sudo mount -a
    sudo chmod 0777 /NAS/${FS_LABEL}
else
    if [ -z ${FS_LABEL} ]; then
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${PART}
    else
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${FS_LABEL}
    fi
fi
$ sudo chmod 0777 /uslocal/bin/automount
The udev rule triggers when the kernel announces that a USB device has been plugged in, calling a service which is kept alive as long as the USB device remains plugged in. The service, when started, calls a bash script which will try to mount any known disk using fstab; otherwise the disk will be mounted to a default location, using its label (if available; the partition name is used otherwise).
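To test the rule without rebooting, you can reload the udev rules and watch the events while plugging a drive in:
$ sudo udevadm control --reload-rules
$ udevadm monitor --udev --property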

Netdata

Let's now install netdata. For this another handy script will help us.
$ sudo bash <(curl -Ss https://my-netdata.io/kickstart.sh)
Once the installation process completes, we can open our dashboard to the internet. We will use Nginx as a reverse proxy:
$ sudo apt install python-certbot-nginx
$ sudo nano /etc/nginx/sites-available/20-netdata
upstream netdata {
    server unix:/var/run/netdata/netdata.sock;
    keepalive 64;
}
server {
    listen 80;
    server_name netdata.naspi.webredirect.org;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://netdata;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
}
$ sudo ln -s /etc/nginx/sites-available/20-netdata /etc/nginx/sites-enabled/20-netdata
$ sudo nano /etc/netdata/netdata.conf
# NetData configuration
[global]
    hostname = NASPi
[web]
    allow netdata.conf from = localhost fd* 192.168.* 172.*
    bind to = unix:/var/run/netdata/netdata.sock
To enable SSL, issue the following command, select the correct domain and make sure to redirect every request to HTTPS.
$ sudo certbot --nginx
Now configure the alarm notifications. I suggest you read through the stock file before modifying it, so you can enable every service you would like. You'll spend some time on it, but eventually you will be very satisfied.
$ sudo nano /etc/netdata/health_alarm_notify.conf
# Alarm notification configuration

# email global notification options
SEND_EMAIL="YES"
# Sender address
EMAIL_SENDER="NetData netdata@naspi.webredirect.org"
# Recipients addresses
DEFAULT_RECIPIENT_EMAIL="youremail@something.com"

# telegram (telegram.org) global notification options
SEND_TELEGRAM="YES"
# Bot token
TELEGRAM_BOT_TOKEN="xxxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# Chat ID
DEFAULT_RECIPIENT_TELEGRAM="xxxxxxxxx"

###############################################################################
# RECIPIENTS PER ROLE

# generic system alarms
role_recipients_email[sysadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sysadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"

# DNS related alarms
role_recipients_email[domainadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[domainadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"

# database servers alarms
role_recipients_email[dba]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[dba]="${DEFAULT_RECIPIENT_TELEGRAM}"

# web servers alarms
role_recipients_email[webmaster]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[webmaster]="${DEFAULT_RECIPIENT_TELEGRAM}"

# proxy servers alarms
role_recipients_email[proxyadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[proxyadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"

# peripheral devices
role_recipients_email[sitemgr]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sitemgr]="${DEFAULT_RECIPIENT_TELEGRAM}"
$ sudo service netdata restart

Samba

Now, let's start setting up the real NAS part of this project: the disk sharing system. First we'll set up Samba, for the sharing within your LAN.
$ sudo apt install samba samba-common-bin
$ sudo nano /etc/samba/smb.conf
[global]
# Network
workgroup = NASPi
interfaces = 127.0.0.0/8 eth0
bind interfaces only = yes

# Log
log file = /var/log/samba/log.%m
max log size = 1000
logging = file syslog@1
panic action = /usr/share/samba/panic-action %d

# Server role
server role = standalone server
obey pam restrictions = yes

# Sync the Unix password with the SMB password.
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
pam password change = yes
map to guest = bad user
security = user

#======================= Share Definitions =======================
[Disk 1]
comment = Disk1 on LAN
path = /NAS/RED
valid users = NAS
force group = NAS
create mask = 0777
directory mask = 0777
writeable = yes
admin users = NASdisk
$ sudo service smbd restart
Now let's add a user for the share:
$ sudo useradd NASbackup -m -G users,NAS
$ sudo passwd NASbackup
$ sudo smbpasswd -a NASbackup
And at last let's open the needed ports in the firewall:
$ sudo nano /etc/nftables.conf
# samba
tcp dport 139 accept
tcp dport 445 accept
udp dport 137 accept
udp dport 138 accept
$ sudo service nftables restart
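From another machine on the LAN you can now check that the share is visible; here I'm using smbclient as an example client:
$ sudo apt install smbclient
$ smbclient -L naspi.local -U NASbackup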

NextCloud

Now let's set up the service to share disks over the internet. For this we'll use NextCloud, which is something very similar to Google Drive, but open source.
$ sudo apt install php-xmlrpc php-soap php-apcu php-smbclient php-ldap php-redis php-imagick php-mcrypt
First of all, we need to create a database for nextcloud.
$ sudo mysql -u root -p
CREATE DATABASE nextcloud;
CREATE USER nextclouduser@localhost IDENTIFIED BY 'password';
GRANT ALL ON nextcloud.* TO nextclouduser@localhost IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
EXIT;
Then we can move on to the installation.
$ cd /tmp && wget https://download.nextcloud.com/server/releases/latest.zip
$ sudo unzip latest.zip
$ sudo mv nextcloud /var/www/html/nextcloud/
$ sudo chown -R www-data:www-data /var/www/html/nextcloud/
$ sudo chmod -R 755 /var/www/html/nextcloud/
$ sudo nano /etc/nginx/sites-available/10-nextcloud
upstream nextcloud {
    server 127.0.0.1:9999;
    keepalive 64;
}
server {
    server_name naspi.webredirect.org;
    root /var/www/html/nextcloud;
    listen 80;
    add_header Referrer-Policy "no-referrer" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Download-Options "noopen" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Permitted-Cross-Domain-Policies "none" always;
    add_header X-Robots-Tag "none" always;
    add_header X-XSS-Protection "1; mode=block" always;
    fastcgi_hide_header X-Powered-By;
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }
    rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
    rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
    rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
    location = /.well-known/carddav { return 301 $scheme://$host:$server_port/remote.php/dav; }
    location = /.well-known/caldav { return 301 $scheme://$host:$server_port/remote.php/dav; }
    client_max_body_size 512M;
    fastcgi_buffers 64 4K;
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
    location / {
        rewrite ^ /index.php;
    }
    location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ {
        deny all;
    }
    location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) {
        deny all;
    }
    location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+)\.php(?:$|\/) {
        fastcgi_split_path_info ^(.+?\.php)(\/.*|)$;
        set $path_info $fastcgi_path_info;
        try_files $fastcgi_script_name =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_param HTTPS on;
        fastcgi_param modHeadersAvailable true;
        fastcgi_param front_controller_active true;
        fastcgi_pass nextcloud;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }
    location ~ ^\/(?:updater|oc[ms]-provider)(?:$|\/) {
        try_files $uri/ =404;
        index index.php;
    }
    location ~ \.(?:css|js|woff2?|svg|gif|map)$ {
        try_files $uri /index.php$request_uri;
        add_header Cache-Control "public, max-age=15778463";
        add_header Referrer-Policy "no-referrer" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-Download-Options "noopen" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Permitted-Cross-Domain-Policies "none" always;
        add_header X-Robots-Tag "none" always;
        add_header X-XSS-Protection "1; mode=block" always;
        access_log off;
    }
    location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap)$ {
        try_files $uri /index.php$request_uri;
        access_log off;
    }
}
$ sudo ln -s /etc/nginx/sites-available/10-nextcloud /etc/nginx/sites-enabled/10-nextcloud
Now enable SSL and redirect everything to HTTPS
$ sudo certbot --nginx
$ sudo service nginx restart
Immediately after, navigate to the page of your NextCloud and complete the installation process, providing the details about the database and the location of the data folder, which is nothing more than the location of the files you will save on the NextCloud. Because it might grow large, I suggest you specify a folder on an external disk.
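As an alternative to the web wizard, NextCloud can also be installed from the command line with its occ tool. This is only a sketch: the database credentials match the ones created above, while the admin account and the data directory are example values you should replace with your own.
$ cd /var/www/html/nextcloud
$ sudo -u www-data php occ maintenance:install --database "mysql" --database-name "nextcloud" --database-user "nextclouduser" --database-pass "password" --admin-user "admin" --admin-pass "strongpassword" --data-dir "/NAS/RED/nextcloud-data"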

Minarca

Now to the backup system. For this we'll use Minarca, a web interface based on rdiff-backup. Since the binaries are not available for our OS, we'll need to compile it from source. It's not a big deal, even our small Raspberry Pi 4 can handle the process.
$ cd /home/pi/Documents
$ sudo git clone https://gitlab.com/ikus-soft/minarca.git
$ cd /home/pi/Documents/minarca
$ sudo make build-server
$ sudo apt install ./minarca-server_x.x.x-dxxxxxxxx_xxxxx.deb
$ sudo nano /etc/minarca/minarca-server.conf
# Minarca configuration.

# Logging
LogLevel=DEBUG
LogFile=/var/log/minarca/server.log
LogAccessFile=/var/log/minarca/access.log

# Server interface
ServerHost=0.0.0.0
ServerPort=8080

# rdiffweb
Environment=development
FavIcon=/opt/minarca/share/minarca.ico
HeaderLogo=/opt/minarca/share/header.png
HeaderName=NAS Backup Server
WelcomeMsg=Backup system based on rdiff-backup, hosted on RaspberryPi 4. [docs](https://gitlab.com/ikus-soft/minarca/-/blob/master/doc/index.md)
DefaultTheme=default

# Enable Sqlite DB Authentication.
SQLiteDBFile=/etc/minarca/rdw.db

# Directories
MinarcaUserSetupDirMode=0777
MinarcaUserSetupBaseDir=/NAS/Backup/Minarca/
Tempdir=/NAS/Backup/Minarca/tmp/
MinarcaUserBaseDir=/NAS/Backup/Minarca/
$ sudo mkdir /NAS/Backup/Minarca/
$ sudo chown minarca:minarca /NAS/Backup/Minarca/
$ sudo chmod 0750 /NAS/Backup/Minarca/
$ sudo service minarca-server restart
As always we need to open the required ports in our firewall settings:
$ sudo nano /etc/nftables.conf
# minarca tcp dport 8080 accept 
$ sudo service nftables restart
And now we can open it to the internet:
$ sudo nano /etc/nginx/sites-available/30-minarca
upstream minarca {
    server 127.0.0.1:8080;
    keepalive 64;
}
server {
    server_name minarca.naspi.webredirect.org;
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://minarca;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
    listen 80;
}
$ sudo ln -s /etc/nginx/sites-available/30-minarca /etc/nginx/sites-enabled/30-minarca
And enable SSL support, with HTTPS redirect:
$ sudo certbot --nginx
$ sudo service nginx restart

DNS records

As a last step, you will need to set up your DNS records, in order to avoid having your mail rejected or sent to spam.

MX record

name: @
value: mail.naspi.webredirect.org
TTL (if present): 90

PTR record

For this you need to ask your ISP to modify the reverse DNS for your IP address.

SPF record

name: @
value: v=spf1 mx ~all
TTL (if present): 90

DKIM record

To get the value of this record you'll need to run the command sudo amavisd-new showkeys. The value is between the parentheses (it should start with v=DKIM1), but remember to remove the double quotes and the line breaks.
name: dkim._domainkey
value: v=DKIM1; p= ...
TTL (if present): 90
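If you'd rather not clean the value up by hand, a small shell sketch can flatten it for you (this assumes the default amavisd-new output format, where the value is split into double-quoted chunks):
$ sudo amavisd-new showkeys | grep -o '"[^"]*"' | tr -d '"\n'; echo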

DMARC record

name: _dmarc
value: v=DMARC1; p=none; pct=100; rua=mailto:dmarc@naspi.webredirect.org
TTL (if present): 90

Router ports

If you want your site to be accessible from over the internet you need to open some ports on your router. Here is a list of mandatory ports, but you can choose to open others as well, for instance port 8080 if you want to use Minarca outside your LAN too.

mailserver ports

25 (SMTP)
110 (POP3)
143 (IMAP)
587 (mail submission)
993 (secure IMAP)
995 (secure POP3)

ssh port

If you want to open your SSH port, I suggest you move it to something other than port 22 (the default), to mitigate attacks from the outside.

HTTP/HTTPS ports

80 (HTTP)
443 (HTTPS)

The end?

And now the server is complete. You have a mailserver capable of receiving and sending emails, a super monitoring system, a cloud server to have your files wherever you go, a samba share to have your files on every computer at home, a backup server for every device you own, and a webserver if you'll ever want to have a personal website.
But now you can do whatever you want, add things, tweak settings and so on. Your imagination is your only limit (almost).
EDIT: typos ;)
submitted by Fly7113 to raspberry_pi [link] [comments]

The internals of Android APK build process - Article


Table of Contents

  • CPU Architecture and the need for Virtual Machine
  • Understanding the Java Virtual Machine
  • Compiling the Source Code
  • Android Virtual Machine
  • Compilation Process to .dex
  • ART over Dalvik
  • Understanding each part of the build process.
  • Source Code
  • Resource Files
  • AIDL Files
  • Library Modules
  • AAR Libraries
  • JAR Libraries
  • Android Asset Packaging Tool
  • resources.arsc
  • D8 and R8
  • Dex and Multidex
  • Signing the APK
  • References
This blog post on the flow of the Android APK build process, the execution environment, and code compilation aims to be the starting point for developers to get familiar with the build process of Android APKs.

CPU Architecture and the need for Virtual Machine

Unveiled in 2007, Android has undergone lots of changes related to its build process, its execution environment, and performance.
There are many fascinating characteristics in Android, and one of them is the variety of CPU architectures it runs on, like ARM, ARM64, and x86.
It is not realistic to compile code separately for each and every architecture. This is where the Java Virtual Machine is used.
https://preview.redd.it/91nrrk3twxk51.png?width=1280&format=png&auto=webp&s=a95b8cf916f000e94c6493a5780d9244e8d27517

Understanding the Java Virtual Machine

The JVM is a virtual machine that enables a computer to run applications that are compiled to Java bytecode. It basically helps us in converting compiled Java code to machine code.
By using the JVM, the issue of dealing with different types of CPU architecture is resolved.
The JVM provides portability, and it also allows Java code to be executed in a virtual environment rather than directly on the underlying hardware.
But the JVM is designed for systems with huge storage and power, whereas Android has comparatively little memory and battery capacity.
For this reason, Google adopted a virtual machine tailored to Android, called Dalvik.

https://preview.redd.it/up2os7juwxk51.png?width=1280&format=png&auto=webp&s=2a290bdc9be86fb08d67228c730329130da3bc63

Compiling the Source Code

Our Java source code for the Android app is compiled into .class-file bytecode by the javac compiler, which would ordinarily be executed on the JVM.
For Kotlin source code, when targeting the JVM, Kotlin produces Java-compatible bytecode, thanks to the kotlinc compiler.
Bytecode is a form of instruction set designed for efficient execution by a software interpreter.
Java bytecode, specifically, is the instruction set of the Java Virtual Machine.
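To see this concretely, here's a minimal sketch (the file and class names are just examples) that compiles a Java source file and disassembles the resulting bytecode:
$ cat Hello.java
public class Hello {
    public static void main(String[] args) { System.out.println("Hi"); }
}
$ javac Hello.java    # produces Hello.class (JVM bytecode)
$ javap -c Hello      # prints the bytecode instructions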

https://preview.redd.it/w2uzoicwwxk51.png?width=1280&format=png&auto=webp&s=b122e0781bf9e9ba236d34a87a636c9218f7ea35

Android Virtual Machine

Each Android app runs on its own virtual machine. From version 1.0 to 4.4, it was 'Dalvik'. In Android 4.4, along with Dalvik, Google experimentally introduced a new Android Runtime called 'ART'.
Android users had the option to choose either Dalvik or ART runtime in Android 4.4.
The generated .class files contain JVM bytecode.
But Android has its own optimized bytecode format, called Dalvik bytecode. Dalvik bytecodes, like JVM bytecodes, are machine-code instructions for a processor.

https://preview.redd.it/sqychk81xxk51.png?width=217&format=png&auto=webp&s=49445fa42e4aa6f4008114a822f364580649fcdf

Compilation Process to .dex

The compilation process converts the .class files and .jar libraries into a single classes.dex file containing Dalvik bytecode. This is done with the dx command.
The dx command turns all of the .class and .jar files together into a single classes.dex file written in the Dalvik bytecode format.
To note, dex means Dalvik Executable.
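As a rough sketch, a manual invocation looks like this (dx shipped with older Android build-tools and has since been replaced by d8; the input paths here are hypothetical):
$ dx --dex --output=classes.dex build/intermediates/classes/ libs/some-lib.jar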
https://preview.redd.it/g4z3tb95xxk51.jpg?width=831&format=pjpg&auto=webp&s=1cdbaacaf10cc529cccca2ba016583596781ee88

ART over Dalvik

Since Android 5.0, ART has been the default Android runtime, replacing Dalvik. This execution environment executes .dex files as well.
The benefit of ART over Dalvik is that apps launch and run faster on ART: because the DEX bytecode is translated into machine code during installation, no extra time is needed to compile it at runtime.
ART and Dalvik are compatible runtimes running DEX bytecode, so apps developed for Dalvik should work when running with ART.
The JIT-based compilation in the previously used Dalvik had the disadvantages of poor battery life, application lag, and lower performance.
This is the reason Google created the Android Runtime (ART).
ART is based on an Ahead-Of-Time (AOT) compilation process, where compilation happens before the application starts.
In ART, the compilation happens during the app installation process itself. Even though this leads to a longer installation time, it reduces app lag and increases battery efficiency.
Even though Dalvik was replaced as the default runtime, the Dalvik bytecode format (.dex) is still in use.
In Android 7.0, JIT came back: a hybrid environment combining a JIT compiler with ART's AOT compilation was introduced.
The bytecode execution environment of Android is important as it is involved in the application startup and installation process.
https://preview.redd.it/qh9bxsplzxk51.png?width=1280&format=png&auto=webp&s=bc40ba6c69cec2110b7d695fe23df094bf5aea6c

Understanding each part of the build process.


https://preview.redd.it/obelgd7axxk51.png?width=950&format=png&auto=webp&s=299abcf4798ad4d2de93f4eb18b9d9e70dd54297

Source Code

Source code is the Java and Kotlin files in the src folder.

Resource Files

The resource files are the ones in the res folder.

AIDL Files

Android Interface Definition Language (AIDL) allows you to define the programming interface for client and service to communicate using IPC.
IPC is interprocess communication.
AIDL can be used between any process in Android.

Library Modules

A library module contains Java or Kotlin classes, Android components, and resources, though assets are not supported.
The code and resources of the library project are compiled and packaged together with the application.
A library module can therefore be considered a compile-time artifact.

AAR Libraries

An Android library compiles into an Android Archive (AAR) file that you can use as a dependency for an Android app module.
AAR files can contain Android resources and a manifest file, which allows you to bundle in shared resources like layouts and drawables in addition to Java or Kotlin classes and methods.

JAR Libraries

A JAR is a Java library and, unlike an AAR, it cannot contain Android resources or a manifest.

Android Asset Packaging Tool

Android Asset Packaging Tool (aapt2) compiles the AndroidManifest and resource files into a single APK.
At this point, the work is divided into two steps: compiling and linking. This improves performance, since if only one file changes, you only need to recompile that one file and link all the intermediate files with the 'link' command.
AAPT2 supports the compilation of all Android resource types, such as drawables and XML files.
When you invoke AAPT2 for compilation, you should pass a single resource file as an input per invocation.
AAPT2 then parses the file and generates an intermediate binary file with a .flat extension.
The link phase merges all the intermediate files generated in the compile phase and outputs one .apk file. You can also generate R.java and proguard-rules at this time.
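As a rough sketch of those two phases (the paths and platform jar are hypothetical; adjust them to your SDK), a manual invocation looks like this:
$ aapt2 compile res/layout/activity_main.xml -o compiled/    # emits compiled/*.flat
$ aapt2 link -o app.unsigned.apk \
    -I "$ANDROID_HOME/platforms/android-30/android.jar" \
    --manifest AndroidManifest.xml \
    --java gen/ \
    compiled/*.flat    # merges the .flat files into one (DEX-less, unsigned) APK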

resources.arsc

The .apk output at this stage does not yet include any DEX file, and since it is not signed, it is an APK that cannot be executed.
This APK contains the AndroidManifest, binary XML files, and resources.arsc.
The resources.arsc file contains all the meta-information about resources, such as an index of all resources in the package.
It is a binary file, and in the APK you actually build and execute it is stored uncompressed, so it can be used simply by mapping it into memory.
Each entry in the R.java that is output alongside the APK is assigned a unique ID, which allows the Java code to reference resources during compilation.
resources.arsc is the index of the resources used when executing the application.

https://preview.redd.it/hmmlfwhdxxk51.png?width=1280&format=png&auto=webp&s=b2fe2b6ad998594a5364bb6af6b5cbd880a2452c

D8 and R8

Starting from Android Studio 3.1 onwards, D8 was made the default compiler.
D8 produces smaller dex files with better performance when compared with the old dx.
R8 is used to shrink and optimize the code; it is essentially an extended, optimizing version of D8.
D8 plays the role of the dexer, converting class files into DEX files, and of the desugarer, converting Java 8 language features into bytecode that can be executed by Android.
R8 further optimizes the dex bytecode. R8 provides features like optimization, obfuscation, and removal of unused classes.
Obfuscation reduces the size of your app by shortening the names of classes, methods, and fields.
Obfuscation has other benefits, such as hindering casual reverse engineering, but the main goal is to reduce size.
Optimization reduces the DEX file size by rewriting unnecessary parts and inlining.
With desugaring, we can use the convenient language features of Java 8 on older devices.
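As a hedged sketch (d8 ships in the SDK build-tools; the paths here are hypothetical), converting compiled classes into DEX by hand looks like this:
$ d8 --release \
    --lib "$ANDROID_HOME/platforms/android-30/android.jar" \
    --output out/ \
    build/classes/com/example/*.class    # writes out/classes.dex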
https://preview.redd.it/so424bxwxxk51.png?width=1280&format=png&auto=webp&s=0ad2df5bd194ec770d453f620aae9556e14ed017

Dex and Multidex

R8 outputs one DEX file called classes.dex.
If you are using Multidex that is not the case: multiple DEX files (classes.dex, classes2.dex, and so on) are produced instead.
If the number of application methods, including those of referenced libraries, exceeds 65,536, a build error will occur.
The method ID range is 0 to 0xFFFF.
In other words, you can only reference 65,536 methods, i.e., serial numbers 0 to 65,535.
This is the cause of the build error that occurs above the 64K limit.
To avoid it, it is useful to review the dependencies of the application and use R8 to remove unused code, or to use Multidex, as sketched below.
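For illustration, when driving d8 by hand, raising the minimum API level lets it split the output into multiple DEX files natively once the 64K limit is hit (a sketch with a hypothetical input jar; below API 21 you would need a legacy main-dex list instead):
$ d8 --release --min-api 21 --output out/ app-classes.jar    # may emit classes.dex, classes2.dex, ...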

https://preview.redd.it/kjyychmzxxk51.png?width=1261&format=png&auto=webp&s=18bea3bf9f7920a4701c2db9714dc53ae6cc5f82

Signing the APK

All APKs require a digital signature before they can be installed or updated on your device.
For debug builds, Android Studio automatically signs the app using the debug certificate generated by the Android SDK tools when we run the app.
A debug keystore and a debug certificate are created automatically.
For release builds, you need a keystore and an upload key to build a signed app. You can either make an APK file with apkbuilder and finally optimize it with zipalign on the command line, or have Android Studio handle it for you with the 'Generate Signed APK' option.
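As a sketch of the manual route with the SDK tools (the keystore name and alias are just examples):
$ keytool -genkeypair -v -keystore release.keystore -alias mykey \
    -keyalg RSA -keysize 2048 -validity 10000    # one-time: create the release keystore
$ zipalign -v -p 4 app-unsigned.apk app-aligned.apk    # align before signing
$ apksigner sign --ks release.keystore --out app-release.apk app-aligned.apk
$ apksigner verify app-release.apk    # sanity check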

https://preview.redd.it/10m8rjl0yxk51.png?width=1468&format=png&auto=webp&s=078c4ab3f41c7d08e7c2280555ef2038cc04c5b0

References

https://developer.android.com/studio/build
https://github.com/dogriffiths/HeadFirstAndroid/wiki/How-Android-Apps-are-Built-and-Run
https://logmi.jp/tech/articles/322851
https://android-developers.googleblog.com/2017/08/next-generation-dex-compiler-now-in.html
https://speakerdeck.com/devpicon/uncovering-the-magic-behind-android-builds-droidfestival-2018
by androiddevnotes on GitHub
🐣
submitted by jiayounokim to androiddev [link] [comments]

[ANN] cargo-lock v5.0: self-contained Cargo.lock parser library with CLI (list dependencies, show dependency trees, translate lockfiles)

Announcing v5.0.0 of the cargo-lock crate, a self-contained Cargo.lock parser/serializer with full support for the V1 and V2 (merge-friendly) lockfile formats, based on serde:
The cargo-lock crate is entirely self-contained, relying on the Cargo.lock file alone as opposed to requiring an entire Cargo project/workspace. This is particularly useful for embedding Cargo.lock information in compiled binaries, such as in rust-audit. This release adds WASM support, in case you'd like to consume/visualize Cargo.lock in a browser!
It's primarily intended for use as a library, and is used by the following projects in that capacity:
That said, in addition to functioning as a library, it also provides an installable CLI!

Command Line Interface

Do you find yourself doing less Cargo.lock, cat Cargo.lock, or grep Cargo.lock often to check your dependency information? Check out the cli feature of cargo-lock:
$ cargo install cargo-lock --features cli 
This release includes a number of improvements to the CLI. For one, you can now list your dependencies by running cargo lock, which outputs them in a YAML-like format:
$ cargo lock
- autocfg 1.0.0
- cargo-lock 5.0.0
- fixedbitset 0.2.0
- gumdrop 0.8.0
- gumdrop_derive 0.8.0
- idna 0.2.0
- indexmap 1.3.2
[...]
You can also include information about transitive dependencies in the output by adding the -d flag:
$ cargo lock -d
- autocfg 1.0.0
- cargo-lock 5.0.0
  - gumdrop 0.8.0
  - petgraph 0.5.1
  - semver 0.10.0
  - serde 1.0.116
  - toml 0.5.6
  - url 2.1.1
- fixedbitset 0.2.0
- gumdrop 0.8.0
  - gumdrop_derive 0.8.0
- gumdrop_derive 0.8.0
  - proc-macro2 1.0.21
  - quote 1.0.3
  - syn 1.0.40
[...]
Want information for a single dependency? Use the -p option (which is nice to combine with -d):
$ cargo lock -p url -d
- url 2.1.1
  - idna 0.2.0
  - matches 0.1.8
  - percent-encoding 2.1.0
Want full source information for each crate? Use the -s option:
$ cargo lock -s
- autocfg 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)
- cargo-lock 5.0.0
- fixedbitset 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)
- gumdrop 0.8.0 (registry+https://github.com/rust-lang/crates.io-index)
- gumdrop_derive 0.8.0 (registry+https://github.com/rust-lang/crates.io-index)
[...]
For fans of cargo tree, there's an equivalent cargo lock tree which can print similar-looking dependency trees using data from Cargo.lock alone:
$ cargo lock tree
cargo-lock 5.0.0
├── url 2.1.1
│   ├── percent-encoding 2.1.0
│   ├── matches 0.1.8
│   └── idna 0.2.0
│       ├── unicode-normalization 0.1.12
│       │   └── smallvec 1.2.0
│       ├── unicode-bidi 0.3.4
│       │   └── matches 0.1.8
│       └── matches 0.1.8
├── toml 0.5.6
[...]
Finally, the cargo lock translate command provides bidirectional translation between the V1 and V2 (a.k.a. "merge friendly") Cargo.lock formats. This also serves as a demonstration of the crate's Cargo.lock serialization support, which should now produce identical output to Cargo itself for either the V1 or V2 formats.
Enjoy!
submitted by bascule to rust [link] [comments]

After effects crashing on startup

Hey folks, I get the below error log when opening After Effects. I can't get past it unless I uninstall. Any ideas here? It keeps happening.

submitted by Lolosdomore to AfterEffects [link] [comments]

[Scheduled Activity] What can a game say beyond “you win” or “you lose?”

The way it all began was “you hit!” or “you miss!”, and once we all put rules to the game of let’s pretend to preempt cries of “no I didn’t” and “you’re cheating,” we had a binary resolution system: pass or fail.
These days we have many other options: PbtA and Blades in the Dark offer partial successes and partial failures for a richer experience.
And yet, the 98-pound gorilla of gaming has never done anything with that. And all the heartbreaker games that are based on it, well, they carry that baggage with them for the most part.
How can we bring levels of success to more games? And does that even matter?
Discuss.
This post is part of the weekly RPGdesign Scheduled Activity series. For a listing of past Scheduled Activity posts and future topics, follow that link to the Wiki. If you have suggestions for Scheduled Activity topics or a change to the schedule, please message the Mod Team or reply to the latest Topic Discussion Thread.
For information on other RPGDesign community efforts, see the Wiki Index.
submitted by cibman to RPGdesign [link] [comments]

another take on Getting into Devops as a Beginner

I really enjoyed m4nz's recent post: Getting into DevOps as a beginner is tricky - My 50 cents to help with it and wanted to do my own version of it, in hopes that it might help beginners as well. I agree with most of their advice and recommend folks check it out if you haven't yet, but I wanted to provide more of a simple list of things to learn and tools to use to complement their solid advice.

Background

While I went to college and got a degree, it wasn't in computer science. I simply developed an interest in Linux and Free & Open Source Software as a hobby. I set up a home server and home theater PC before smart TV's and Roku were really a thing simply because I thought it was cool and interesting and enjoyed the novelty of it.
Fast forward a few years and basically I was just tired of being poor lol. I had heard on the now defunct Linux Action Show podcast about linuxacademy.com and how people had had success with getting Linux jobs despite not having a degree by taking the courses there and acquiring certifications. I took a course, got the basic LPI Linux Essentials Certification, then got lucky by landing literally the first Linux job I applied for at a consulting firm as a junior sysadmin.
Without a CS degree, any real experience, and 1 measly certification, I figured I had to level up my skills as quickly as possible and this is where I really started to get into DevOps tools and methodologies. I now have 5 years experience in the IT world, most of it doing DevOps/SRE work.

Certifications

People have varying opinions on the relevance and worth of certifications. If you already have a CS degree or experience then they're probably not needed unless their structure and challenge would be a good motivation for you to learn more. Without experience or a CS degree, you'll probably need a few to break into the IT world unless you know someone or have something else to prove your skills, like a github profile with lots of open source contributions, or a non-profit you built a website for or something like that. Regardless of their efficacy at judging a candidate's ability to actually do DevOps/sysadmin work, they can absolutely help you get hired in my experience.
Right now, these are the certs I would recommend beginners pursue. You don't necessarily need all of them to get a job (I got started with just the first one on this list), and any real world experience you can get will be worth more than any number of certs imo (both in terms of knowledge gained and in increasing your prospects of getting hired), but this is a good starting place to help you plan out what certs you want to pursue. Some hiring managers and DevOps professionals don't care at all about certs, some folks will place way too much emphasis on them ... it all depends on the company and the person interviewing you. In my experience I feel that they absolutely helped me advance my career. If you feel you don't need them, that's cool too ... they're a lot of work so skip them if you can of course lol.

Tools and Experimentation

While certs can help you get hired, they won't make you a good DevOps Engineer or Site Reliability Engineer. The only way to get good, just like with anything else, is to practice. There are a lot of sub-areas in the DevOps world to specialize in ... though in my experience, especially at smaller companies, you'll be asked to do a little (or a lot) of all of them.
Though definitely not exhaustive, here's a list of tools you'll want to gain experience with both as points on a resume and as trusty tools in your tool belt you can call on to solve problems. While there is plenty of "resume driven development" in the DevOps world, these tools are solving real problems that people encounter and struggle with all the time, i.e., you're not just learning them because they are cool and flashy, but because not knowing and using them is a giant pain!
There are many, many other DevOps tools I left out that are worthwhile (I didn't even touch the tools in the kubernetes space like helm and spinnaker). Definitely don't stop at this list! A good DevOps engineer is always looking to add useful tools to their tool belt. This industry changes so quickly, it's hard to keep up. That's why it's important to also learn the "why" of each of these tools, so that you can determine which tool would best solve a particular problem. Nearly everything on this list could be swapped for another tool to accomplish the same goals. The ones I listed are simply the most common/popular and so are a good place to start for beginners.

Programming Languages

Any language you learn will be useful and make you a better sysadmin/DevOps Eng/SRE, but these are the 3 I would recommend that beginners target first.

Expanding your knowledge

As m4nz correctly pointed out in their post, while knowledge of and experience with popular DevOps tools is important; nothing beats in-depth knowledge of the underlying systems. The more you can learn about Linux, operating system design, distributed systems, git concepts, language design, networking (it's always DNS ;) the better. Yes, all the tools listed above are extremely useful and will help you do your job, but it helps to know why we use those tools in the first place. What problems are they solving? The solutions to many production problems have already been automated away for the most part: kubernetes will restart a failed service automatically, automated testing catches many common bugs, etc. ... but that means that sometimes the solution to the issue you're troubleshooting will be quite esoteric. Occam's razor still applies, and it's usually the simplest explanation that works; but sometimes the problem really is at the kernel level.
The biggest innovations in the IT world are generally ones of abstractions: config management abstracts away tedious server provisioning, cloud providers abstract away the data center, containers abstract away the OS level, container orchestration abstracts away the node and cluster level, etc. Understanding what it happening beneath each layer of abstraction is crucial. It gives you a "big picture" of how everything fits together and why things are the way they are; and it allows you to place new tools and information into the big picture so you'll know why they'd be useful or whether or not they'd work for your company and team before you've even looked in-depth at them.
Anyway, I hope that helps. I'll be happy to answer any beginner/getting-started questions that folks have! I don't care to argue about this or that point in my post, but if you have a better suggestion or additional advice then please just add it here in the comments or in your own post! A good DevOps Eng/SRE freely shares their knowledge so that we can all improve.
submitted by jamabake to devops [link] [comments]

ResultsFileName = 0×0 empty char array Why? Where are my results?

Hello,
I am not getting any errors and I do not understand why I am not getting any output. I am trying to batch process a large number of ECG signals. Below is my code and the two relevant functions. Any help greatly appreciated. I am very new.
d = importSections("Dx_sections.csv");

% set the number of recordings
n = height(d);

% settings
HRVparams = InitializeHRVparams('test_physionet')

for ii = 1:n
    % Import waveform (ECG)
    [record, signals] = read_edf(strcat(d.PID(ii), '/baseline.edf'));
    myecg = record.ECG;
    Ann = [];
    [HRVout, ResultsFileName] = Main_HRV_Analysis(myecg,'','ECGWaveform',HRVparams)
end

function [HRVout, ResultsFileName] = Main_HRV_Analysis(InputSig,t,InputFormat,HRVparams,subID,ann,sqi,varargin)
% ====== HRV Toolbox for PhysioNet Cardiovascular Signal Toolbox =========
%
%   Main_HRV_Analysis(InputSig,t,InputFormat,HRVparams,subID,ann,sqi,varargin)
%
%   OVERVIEW:
%       Main "Validated Open-Source Integrated Matlab" VOSIM Toolbox script
%       Configured to accept RR intervals as well as raw data as input file
%
%   INPUT:
%       InputSig    - Vector containing RR intervals data (in seconds)
%                     or ECG/PPG waveform
%       t           - Time indices of the rr interval data (seconds) or
%                     leave empty for ECG/PPG input
%       InputFormat - String that specifies if the input vector is:
%                     'RRIntervals' for RR interval data
%                     'ECGWaveform' for ECG waveform
%                     'PPGWaveform' for PPG signal
%       HRVparams   - struct of settings for hrv_toolbox analysis that can
%                     be obtained using the InitializeHRVparams.m function
%                     HRVparams = InitializeHRVparams();
%
%   OPTIONAL INPUTS:
%       subID       - (optional) string to identify current subject
%       ann         - (optional) annotations of the RR data at each point
%                     indicating the type of the beat
%       sqi         - (optional) Signal Quality Index; Requires a
%                     matrix with at least two columns. Column 1
%                     should be timestamps of each sqi measure, and
%                     Column 2 should be SQI on a scale from 0 to 1.
%       Use InputSig, Type pairs for additional signals such as ABP
%       or PPG signal. The input signal must be a vector containing
%       the signal waveform and the Type: 'ABP' and\or 'PPG'.
%
%   OUTPUTS:
%       results         - HRV time and frequency domain metrics as well
%                         as AC and DC, SDANN and SDNNi
%       ResultsFileName - Name of the file containing the results
%
%   NOTE: before running this script review and modify the parameters
%         in the "initialize_HRVparams.m" file according to the specifics
%         of the new project (see the readme.txt file for further details)
%
%   EXAMPLES
%       - rr interval input
%         Main_HRV_Analysis(RR,t,'RRIntervals',HRVparams)
%       - ECG waveform input
%         Main_HRV_Analysis(ECGsig,t,'ECGWaveform',HRVparams,'101')
%       - ECG waveform and also ABP and PPG waveforms
%         Main_HRV_Analysis(ECGsig,t,'ECGWaveform',HRVparams,[],[],[], abpSig,
%         'ABP', ppgSig, 'PPG')
%
%   DEPENDENCIES & LIBRARIES:
%       HRV Toolbox for PhysioNet Cardiovascular Signal Toolbox
%       https://github.com/cliffordlab/PhysioNet-Cardiovascular-Signal-Toolbox
%
%   REFERENCE:
%       Vest et al. "An Open Source Benchmarked HRV Toolbox for Cardiovascular
%       Waveform and Interval Analysis" Physiological Measurement (In Press), 2018.
%
%   REPO:
%       https://github.com/cliffordlab/PhysioNet-Cardiovascular-Signal-Toolbox
%   ORIGINAL SOURCE AND AUTHORS:
%       This script written by Giulia Da Poian
%       Dependent scripts written by various authors
%       (see functions for details)
%   COPYRIGHT (C) 2018
%   LICENSE:
%       This software is offered freely and without warranty under
%       the GNU (v3 or later) public license. See license file for
%       more information
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

if nargin < 4
    error('Wrong number of input arguments')
end
if nargin < 5
    subID = '0000';
end
if nargin < 6
    ann = [];
end
if nargin < 7
    sqi = [];
end
if length(varargin) == 1 || length(varargin) == 3
    error('Incomplete Signal-Type pair')
elseif length(varargin) == 2
    extraSigType = varargin(2);
    extraSig = varargin{1};
elseif length(varargin) == 4
    extraSigType = [varargin(2) varargin(4)];
    extraSig = [varargin{1} varargin{3}];
end
if isa(subID,'cell'); subID = string(subID); end

% Control on signal length
if (strcmp(InputFormat, 'ECGWaveform') && length(InputSig)/HRVparams.Fs < HRVparams.windowlength) ...
    || (strcmp(InputFormat, 'PPGWaveform') && length(InputSig)/HRVparams.Fs 300 s
VLF = [0.0033 .04];  % Requires at least 300 s window
LF = [.04 .15];      % Requires at least 25 s window
HF = [0.15 0.4];     % Requires at least 7 s window
HRVparams.freq.limits = [ULF; VLF; LF; HF];
HRVparams.freq.zero_mean = 1;   % Default: 1, Option for subtracting the mean from the input data
HRVparams.freq.method = 'lomb'; % Default: 'lomb'
                                % Options: 'lomb', 'burg', 'fft', 'welch'
HRVparams.freq.plot_on = 0;

% The following settings are for debugging spectral analysis methods
HRVparams.freq.debug_sine = 0;     % Default: 0, Adds sine wave to tachogram for debugging
HRVparams.freq.debug_freq = 0.15;  % Default: 0.15
HRVparams.freq.debug_weight = .03; % Default: 0.03

% Lomb:
HRVparams.freq.normalize_lomb = 0; % Default: 0
                                   % 1 = Normalizes Lomb Periodogram,
                                   % 0 = Doesn't normalize

% Burg: (not recommended)
HRVparams.freq.burg_poles = 15; % Default: 15, Number of coefficients
                                % for spectral estimation using the Burg
                                % method (not recommended)

% The following settings are only used when the user specifies spectral
% estimation methods that use resampling: 'welch','fft', 'burg'
HRVparams.freq.resampling_freq = 7; % Default: 7, Hz
HRVparams.freq.resample_interp_method = 'cub'; % Default: 'cub'
                                               % 'cub' = cubic spline method
                                               % 'lin' = linear spline method
HRVparams.freq.resampled_burg_poles = 100; % Default: 100

%% 11. SDANN and SDNNI Analysis Settings
HRVparams.sd.on = 1;              % Default: 1, SD analysis 1=On or 0=Off
HRVparams.sd.segmentlength = 300; % Default: 300, windows length in seconds

%% 12. PRSA Analysis Settings
HRVparams.prsa.on = 1;          % Default: 1, PRSA Analysis 1=On or 0=Off
HRVparams.prsa.win_length = 30; % Default: 30, The length of the PRSA signal
                                % before and after the anchor points
                                % (the resulting PRSA has length 2*L)
HRVparams.prsa.thresh_per = 20; % Default: 20%, Percent difference that one beat can
                                % differ from the next in the prsa code
HRVparams.prsa.plot_results = 0; % Default: 0
HRVparams.prsa.scale = 2;        % Default: 2, scale parameter for wavelet analysis (to compute AC and DC)

%% 13. Peak Detection Settings
% The following settings are for jqrs.m
HRVparams.PeakDetect.REF_PERIOD = 0.250; % Default: 0.25 (should be 0.15 for FECG), refractory period in sec between two R-peaks
HRVparams.PeakDetect.THRES = .6;      % Default: 0.6, Energy threshold of the detector
HRVparams.PeakDetect.fid_vec = [];    % Default: [], If some subsegments should not be used for finding the optimal
                                      % threshold of the P&T then input the indices of the corresponding points here
HRVparams.PeakDetect.SIGN_FORCE = []; % Default: [], Force sign of peaks (positive value/negative value)
HRVparams.PeakDetect.debug = 0;       % Default: 0
HRVparams.PeakDetect.ecgType = 'MECG'; % Default: MECG, options (adult MECG) or fetal ECG (fECG)
HRVparams.PeakDetect.windows = 15;    % Default: 15, (in seconds) size of the window onto which to perform QRS detection

%% 14. Entropy Settings
% Multiscale Entropy
HRVparams.MSE.on = 1;            % Default: 1, MSE Analysis 1=On or 0=Off
HRVparams.MSE.windowlength = []; % Default: [], windows size in seconds, default perform MSE on the entire signal
HRVparams.MSE.increment = [];    % Default: [], window increment
HRVparams.MSE.RadiusOfSimilarity = 0.15; % Default: 0.15, Radius of similarity (% of std)
HRVparams.MSE.patternLength = 2; % Default: 2, pattern length
HRVparams.MSE.maxCoarseGrainings = 20; % Default: 20, Maximum number of coarse-grainings

% SampEn and ApEn
HRVparams.Entropy.on = 1; % Default: 1, MSE Analysis 1=On or 0=Off
HRVparams.Entropy.RadiusOfSimilarity = 0.15; % Default: 0.15, Radius of similarity (% of std)
HRVparams.Entropy.patternLength = 2; % Default: 2, pattern length

%% 15. DFA Settings
HRVparams.DFA.on = 1;            % Default: 1, DFA Analysis 1=On or 0=Off
HRVparams.DFA.windowlength = []; % Default: [], windows size in seconds, default perform DFA on the entire signal
HRVparams.DFA.increment = [];    % Default: [], window increment
HRVparams.DFA.minBoxSize = 4;    % Default: 4, Smallest box width
HRVparams.DFA.maxBoxSize = [];   % Largest box width (default in DFA code: signal length/4)
HRVparams.DFA.midBoxSize = 16;   % Medium time scale box width (default in DFA code: 16)

%% 16. Poincaré plot
HRVparams.poincare.on = 1; % Default: 1, Poincare Analysis 1=On or 0=Off

%% 17. Heart Rate Turbulence (HRT) - Settings
HRVparams.HRT.on = 1;            % Default: 1, HRT Analysis 1=On or 0=Off
HRVparams.HRT.BeatsBefore = 2;   % Default: 2, # of beats before PVC
HRVparams.HRT.BeatsAfter = 16;   % Default: 16, # of beats after PVC and CP
HRVparams.HRT.GraphOn = 0;       % Default: 0, do not plot
HRVparams.HRT.windowlength = 24; % Default 24h, windows size in hours
HRVparams.HRT.increment = 24;    % Default 24h, sliding window increment in hours
HRVparams.HRT.filterMethod = 'mean5before'; % Default mean5before, HRT filtering option

%% 18. Output Settings
HRVparams.gen_figs = 0;  % Generate figures
HRVparams.save_figs = 0; % Save generated figures
if HRVparams.save_figs == 1
    HRVparams.gen_figs = 1;
end

% Format settings for HRV Outputs
HRVparams.output.format = 'csv'; % 'csv' - creates csv file for output
                                 % 'mat' - creates .mat file for output
HRVparams.output.separate = 0;   % Default: 1 = separate files for each subject
                                 % 0 = all results in one file
HRVparams.output.num_win = [];   % Specify number of lowest hr windows returned
                                 % leave blank if all windows should be returned

% Format settings for annotations generated
HRVparams.output.ann_format = 'binary'; % 'binary' = binary annotation file generated
                                        % 'csv' = ASCII CSV file generated

%% 19. Filename to Save Data
HRVparams.time = datestr(now, 'yyyymmdd'); % Setup time for filename of output
HRVparams.filename = [HRVparams.time '_' project_name];

%% Export Parameter as Latex Table
% Note that if you change the order of the parameters or add parameters
% this might not work
ExportHRVparams(HRVparams);

end
submitted by MisuzBrisby to matlab [link] [comments]


A binary option based on a stock index future is a contract used for speculating on a particular stock index, such as the futures derivative of the S&P 500 or the NASDAQ 100. Traders buy or sell binary options depending on whether they think the underlying market will move up or down.

Binary options bring a sense of universality to your trading. You can trade multiple assets, multiple markets, multiple timeframes, and multiple sub-classes of options all from the same broker. For those that are looking for diversity, trading index binary options can be a great way to add a new element to your trading without much of a hassle.

Binary.com is an award-winning online trading provider that helps its clients to trade on financial markets through binary options and CFDs. Trading binary options and CFDs on Synthetic Indices is classified as a gambling activity. Remember that gambling can be addictive; please play responsibly. Learn more about Responsible Trading. Some ...

A challenge in binary option trading is correctly predicting the sustainability of a trend over a given period. For example, a trader may take the right position for an index, predicting it would ...

Trading the Dow Jones Index. Before starting to trade the DJ30 index as a binary options asset, it is pertinent for traders to know that this asset is not open for trading 24 hours of the day. It is only open for trading from 1330 GMT to 2030 GMT, Monday to Friday. As such, trade decisions on the DJ30 asset must factor in this time frame.



