Hello there friends! I'm an undergrad studying Actuarial Science, and I was wondering: is there a way to model a multinomial distribution? This is a little project I'm doing in my spare time to deepen my understanding of probability and statistics.
Let's say I play a game that has 5 outcomes, each with a different probability (+$500 = 35%, -$200 = 30%, -$150 = 20%, -$100 = 5%, -$50 = 10%). Each play is independent of the others. By playing this game, I have an edge and should win $75 on average every time I play.
But I only make money 35% of the time, so from a risk standpoint I want to build a model that tells me how many times I have to play before I'm almost certain to be in profit overall. Is it 50 times? 100? 1000?
I tried doing it in Excel but got stuck, so I need some help creating this little program in R. I did one last week for a binomial distribution (much easier to model).
Here we have a basic discrete probability distribution for the profit (X) on any individual trial:
x        500    -200   -150   -100   -50
P(X=x)   0.35   0.30   0.20   0.05   0.10
It's straightforward to work out the mean and variance of this discrete distribution; call them E(X) and Var(X). The total profit after n independent trials then has mean nE(X) and variance nVar(X).
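A minimal sketch in R of that calculation, taking the payoffs and probabilities straight from the table above (the variable names here are just illustrative):

```r
# Payoffs and probabilities from the table above
x <- c(500, -200, -150, -100, -50)
p <- c(0.35, 0.30, 0.20, 0.05, 0.10)

EX   <- sum(x * p)              # E(X)   = 75
VarX <- sum(x^2 * p) - EX^2     # Var(X) = 99125

# Mean and variance of the total profit after n plays
n <- 100
mean_n <- n * EX
var_n  <- n * VarX
```

So the edge of $75 per play falls out directly, with a per-play standard deviation of sqrt(99125), roughly $315, which is why a single play is so risky relative to its mean.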
By the central limit theorem, this total profit (a sum of n independent, identically distributed random variables) is approximately normal for large n, so we can easily calculate probabilities for it from the normal distribution.
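Under that normal approximation, P(profit after n plays > 0) = pnorm(nE(X) / sqrt(nVar(X))), so one can scan for the smallest n that clears a chosen confidence level. A sketch, assuming a 99% target (that threshold is my choice, not from the question):

```r
EX   <- 75       # E(X) from the distribution above
VarX <- 99125    # Var(X) from the distribution above

# Probability of being in profit after n plays, under the normal approximation
p_profit <- function(n) pnorm(n * EX / sqrt(n * VarX))

# Smallest n with at least a 99% chance of a positive total profit
n <- 1
while (p_profit(n) < 0.99) n <- n + 1
n   # 96 plays under these assumptions
```

Note this answers "how many plays until I'm 99% sure of a profit", not "certain" in the absolute sense; no finite n makes a loss impossible, so you have to pick a confidence level.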
Alternatively, I gather it would be relatively easy to simulate this game many times in R, check the fraction of simulated sessions that are in profit after 1 play, 2, 3, ..., and go from there.
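That simulation approach might look like the sketch below: draw many independent sessions with `sample()`, take cumulative sums, and estimate the probability of being in profit after each number of plays (the session length and number of simulations here are arbitrary choices):

```r
set.seed(1)
x <- c(500, -200, -150, -100, -50)
p <- c(0.35, 0.30, 0.20, 0.05, 0.10)

n_plays <- 200      # plays per simulated session (assumption)
n_sims  <- 20000    # number of simulated sessions (assumption)

# Each column is one session; each row is one play within the session
draws  <- matrix(sample(x, n_plays * n_sims, replace = TRUE, prob = p),
                 nrow = n_plays)

# Running (cumulative) profit within each session
profit <- apply(draws, 2, cumsum)

# Estimated probability of being in profit after 1, 2, ..., n_plays plays
p_in_profit <- rowMeans(profit > 0)
p_in_profit[c(10, 50, 100, 200)]
```

The Monte Carlo estimates should agree closely with the normal approximation for moderate n, and they also capture the exact discrete behaviour for small n where the CLT is rough.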
I need a hand folks. :)
Anybody can work with me on this? :)
Happy holidays everyone!!!