So if I am understanding correctly, expanding the constraints to run from 0% to 100% should keep all of the data sets in play and produce a relatively consistent result. I tried running the same data set with the changed constraints, but unfortunately the results differed from run to run, as follows.
Run 1:

Product   Current            Theoretical Change   Optimal
          Weighting  Units   Weighting   Units    Weighting  Units
MSFT       14.16%    1.00    -12.09%     -0.85      2.07%    0.15
IBM        44.52%    1.00    -43.30%     -0.97      1.22%    0.03
INTC       13.52%    1.00     15.68%      1.16     29.20%    2.16
SUNW        2.27%    1.00     19.45%      8.57     21.72%    9.57
AMZN       25.53%    1.00     20.27%      0.79     45.80%    1.79
Total     100.00%                                 100.00%
Run 2:

Product   Current            Theoretical Change   Optimal
          Weighting  Units   Weighting   Units    Weighting  Units
MSFT       14.16%    1.00    -13.70%     -0.97      0.46%    0.03
IBM        44.52%    1.00    -43.14%     -0.97      1.38%    0.03
INTC       13.52%    1.00     19.71%      1.46     33.23%    2.46
SUNW        2.27%    1.00      8.41%      3.71     10.68%    4.71
AMZN       25.53%    1.00     28.71%      1.12     54.25%    2.12
Total     100.00%                                 100.00%
Each run used 10,000 iterations. Can you think of anything else I can try to get a consistent portfolio recommendation? Otherwise, is there a particular way I can interpret the data to make the most use of it?
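In case it helps, here is roughly what I believe the tool is doing under the hood. This is only a minimal sketch, assuming a random-weight Monte Carlo search that maximizes the Sharpe ratio over long-only weights; the expected returns and covariance matrix below are made-up placeholders, not my actual inputs.

# Sketch of a random-weight portfolio search (assumed, not the tool's actual code).
# Placeholder inputs: tickers from my data set, hypothetical returns/covariance.
import numpy as np

tickers = ["MSFT", "IBM", "INTC", "SUNW", "AMZN"]
mu = np.array([0.10, 0.07, 0.12, 0.15, 0.18])   # hypothetical expected returns
cov = np.diag([0.04, 0.03, 0.05, 0.09, 0.10])   # hypothetical covariance matrix

rng = np.random.default_rng(seed=42)  # fixing the seed makes every run identical

best_sharpe, best_w = -np.inf, None
for _ in range(10_000):
    # Dirichlet draw: random weights in [0, 1] that sum to 1 (the 0%-100% constraint)
    w = rng.dirichlet(np.ones(len(tickers)))
    sharpe = (w @ mu) / np.sqrt(w @ cov @ w)
    if sharpe > best_sharpe:
        best_sharpe, best_w = sharpe, w

for t, w in zip(tickers, best_w):
    print(f"{t}: {w:.2%}")

My thought is that 10,000 random weightings over five assets only sample the space coarsely, so two runs can land on noticeably different near-optimal portfolios even when their objective values are close. If that is what is happening, fixing the seed (or raising the iteration count) should make the output repeatable, though perhaps a deterministic mean-variance solver would be the better answer?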