
ginfreely

Alfrescian
Loyal
Agree with you bro


"Lies, damned lies, and statistics" is a phrase describing the persuasive power of numbers, particularly the use of statistics to bolster weak arguments, and the tendency of people to disparage statistics that do not support their positions. It is also sometimes colloquially used to doubt statistics used to prove an opponent's point.

The term was popularised in the United States by Mark Twain (among others), who attributed it to the 19th-century British Prime Minister Benjamin Disraeli (1804–1881): "There are three kinds of lies: lies, damned lies, and statistics." However, the phrase is not found in any of Disraeli's works and the earliest known appearances were years after his death. Other coiners have therefore been proposed. The most plausible, given current evidence, is Englishman Sir Charles Wentworth Dilke (1843–1911).

Statistics: The only science that enables different experts using the same figures to draw different conclusions.
Evan Esar, Esar's Comic Dictionary
American Humorist (1899 - 1995)

I think statistics and numbers cannot lie; it is the manipulation of statistics and the hiding of data that can lie. Some statistics that will not lie:

What is the unemployment rate of Singapore-born Singapore citizens? Not the unemployment rate of citizens and PRs lumped together!

What is the number of jobs created by the two casinos that went to Singapore-born citizens?

What is the number of Singapore citizens who are customers of the casinos, and what percentage of total casino customers do they make up?

What is the amount of revenue earned from these Singaporean gamblers, and what percentage of total revenue does it represent?
 

ginfreely

Alfrescian
Loyal
Putin criticizes the dollar's position, or rather the position that has arisen from the current political impasse that produced the downgrade of the dollar, and then he suggests that the ruble can be a reserve currency. He's trying to talk up the ruble, and hence Russia, into a position of influence. He's trying very, very hard. I don't believe nations like Japan, or for that matter China, will allow that to happen. All to do with face and historical baggage!

Wow, Russia has progressed. I remember back in the 90s, my Russian customer told me they would rather put their money (USD, not roubles) under the pillow than in the bank! I was told their banks couldn't be trusted!
 

Analytical Professor

Alfrescian
Loyal
Ouch ...... Stats...... What shall I say?

I'd still rather stick with Evan Esar.


A misuse of statistics occurs when a statistical argument asserts a falsehood. In some cases, the misuse may be accidental. In others, it is purposeful and for the gain of the perpetrator. When the statistical reason involved is false or misapplied, this constitutes a statistical fallacy.

The false statistics trap can be quite damaging to the quest for knowledge. For example, in medical science, correcting a falsehood may take decades and cost lives.

Misuses can be easy to fall into. Professional scientists, even mathematicians and professional statisticians, can be fooled by quite simple methods, however careful they are to check everything. Scientists have been known to fool themselves with statistics through lack of knowledge of probability theory and lack of standardization of their tests.


1 Types of misuse
1.1 Discarding unfavorable data
1.2 Loaded questions
1.3 Overgeneralization
1.4 Biased samples
1.5 Misreporting or misunderstanding of estimated error
1.6 False causality
1.7 Proof of the null hypothesis
1.8 Data dredging
1.9 Data manipulation
1.10 Other fallacies


Types of misuse

Discarding unfavorable data
In product quality control terms, all a company has to do to promote a neutral (useless) product is to find or conduct, for example, 40 studies with a confidence level of 95%. If the product is really useless, this would on average produce one study showing the product was beneficial, one study showing it was harmful and thirty-eight inconclusive studies (38 is 95% of 40). This tactic becomes more effective the more studies there are available. Organizations that do not publish every study they carry out, such as tobacco companies denying a link between smoking and cancer, or miracle pill vendors, are likely to use this tactic.

Another common technique is to perform a study that tests a large number of dependent (response) variables at the same time. For example, a study testing the effect of a medical treatment might use as dependent variables the probability of survival, the average number of days spent in the hospital, the patient's self-reported level of pain, etc. This also increases the likelihood that at least one of the variables will by chance show a correlation with the independent (explanatory) variable.
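To see how easily this happens, here is a minimal simulation sketch in Python (NumPy only; the sample size, the effect size of zero, and the normal-approximation test are all choices made for illustration). It runs 40 studies of a product that truly does nothing and counts how many cross the conventional 5% significance threshold by chance:

```python
import numpy as np

rng = np.random.default_rng(0)
n_studies = 40       # studies of a genuinely useless product
n_per_group = 50     # participants per arm in each study

significant = 0
for _ in range(n_studies):
    # Both arms come from the same distribution: the product does nothing.
    treated = rng.normal(0.0, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n_per_group +
                 control.var(ddof=1) / n_per_group)
    if abs(diff / se) > 1.96:    # roughly p < 0.05, two-sided
        significant += 1

print(f"{significant} of {n_studies} null studies came out 'significant'")
# On average about 2 of 40: one looking 'beneficial', one looking 'harmful'.
```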

Loaded questions
The answers to surveys can often be manipulated by wording the question in such a way as to push respondents towards a certain answer. For example, in polling support for a war, the questions:
Do you support the attempt by the USA to bring freedom and democracy to other places in the world?
Do you support the unprovoked military action by the USA?
will likely result in data skewed in different directions, although they are both polling about the support for the war.

Another way to do this is to precede the question by information that supports the "desired" answer. For example, more people will likely answer "yes" to the question "Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?" than to the question "Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?"

Overgeneralization

Overgeneralization is a fallacy occurring when a statistic about a particular population is asserted to hold among members of a group for which the original population is not a representative sample.

For example, suppose 100% of apples are observed to be red in summer. The assertion "All apples are red" would be an instance of overgeneralization because the original statistic was true only of a specific subset of apples (those in summer), which is not expected to be representative of the population of apples as a whole.

A real-world example of the overgeneralization fallacy can be observed as an artifact of modern polling techniques, which prohibit calling cell phones for over-the-phone political polls. Young people are more likely than other demographic groups to have only a cell phone rather than also having a conventional "landline" phone, and young people who do not own a landline phone are even more likely to be liberal than their demographic as a whole, so such polls effectively exclude many voters who are more likely to be liberal.[1]
Thus, a poll examining the voting preferences of young people using this technique could not claim to be representative of young people's true voting preferences as a whole without overgeneralizing, because the sample used is not representative of the population as a whole.
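A rough numerical illustration of the landline-poll artifact (a sketch only; the population shares and support rates below are invented for the example, not taken from any survey):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000  # hypothetical population of young voters

# Assume 40% are cell-only, and cell-only voters lean liberal more strongly.
cell_only = rng.random(N) < 0.40
p_liberal = np.where(cell_only, 0.70, 0.50)   # assumed support rates
liberal = rng.random(N) < p_liberal

print(f"true share liberal:          {liberal.mean():.3f}")   # about 0.58

# A landline-only poll silently drops every cell-only respondent.
print(f"landline-poll share liberal: {liberal[~cell_only].mean():.3f}")  # about 0.50
```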

Overgeneralization often occurs when information is passed through nontechnical sources, in particular the mass media.[2][3]

Biased samples

Misreporting or misunderstanding of estimated error

If a research team wants to know how 300 million people feel about a certain topic, it would be impractical to ask all of them. However, if the team picks a random sample of about 1000 people, they can be fairly certain that the results given by this group are representative of what the larger group would have said if they had all been asked.

This confidence can actually be quantified by the central limit theorem and other mathematical results. Confidence is expressed as a probability of the true result (for the larger group) being within a certain range of the estimate (the figure for the smaller group). This is the "plus or minus" figure often quoted for statistical surveys. The probability part of the confidence level is usually not mentioned; if so, it is assumed to be a standard number like 95%.
The two numbers are related. If a survey has an estimated error of ±5% at 95% confidence, it also has an estimated error of ±6.6% at 99% confidence. ±x% at 95% confidence is always ±1.32x% at 99% confidence.
The smaller the estimated error, the larger the required sample, at a given confidence level.
at 95.4% confidence:
±1% would require 10,000 people.
±2% would require 2,500 people.
±3% would require 1,111 people.
±4% would require 625 people.
±5% would require 400 people.
±10% would require 100 people.
±20% would require 25 people.
±25% would require 16 people.
±50% would require 4 people.
Most people assume, because the confidence figure is omitted, that there is a 100% certainty that the true result is within the estimated error. This is not mathematically correct.
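The table above can be reproduced from the worst-case margin of error for a proportion. A short sketch (Python with scipy.stats for the normal quantiles), assuming the 95.4% level corresponds to z = 2 and the surveyed proportion is near 50%, the worst case:

```python
from scipy.stats import norm

# Worst-case standard error of a proportion is 0.5/sqrt(n), so at the
# 95.4% level (z = 2) the margin of error is 2 * 0.5/sqrt(n) = 1/sqrt(n),
# and the required sample is n = (1/moe)^2.
for moe in (0.01, 0.02, 0.03, 0.04, 0.05, 0.10, 0.20, 0.25, 0.50):
    print(f"±{moe:.0%} needs about {(1 / moe) ** 2:,.0f} people")

# The 95% -> 99% conversion quoted earlier is the ratio of z-values:
z95, z99 = norm.ppf(0.975), norm.ppf(0.995)
print(f"factor = {z99 / z95:.3f}")  # about 1.31; rounding ±6.6%/±5% gives the 1.32 above
```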

Many people may not realize that the randomness of the sample is very important. In practice, many opinion polls are conducted by phone, which distorts the sample in several ways, including exclusion of people who do not have phones, favoring the inclusion of people who have more than one phone, favoring the inclusion of people who are willing to participate in a phone survey over those who refuse, etc. Non-random sampling makes the estimated error unreliable.

On the other hand, many people consider that statistics are inherently unreliable because not everybody is called, or because they themselves are never polled. Many people think that it is impossible to get data on the opinion of dozens of millions of people by just polling a few thousands. This is also inaccurate. A poll with perfect unbiased sampling and truthful answers has a mathematically determined margin of error, which only depends on the number of people polled.
However, often only one margin of error is reported for a survey. When results are reported for population subgroups, a larger margin of error will apply, but this may not be made clear. For example, a survey of 1000 people may contain 100 people from a certain ethnic or economic group. The results focusing on that group will be much less reliable than results for the full population. If the margin of error for the full sample was 4%, say, then the margin of error for such a subgroup could be around 13%.
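The subgroup figure follows from the same 1/sqrt(n) scaling of the margin of error; a quick check under that assumption:

```python
import math

n_full, n_sub = 1000, 100
moe_full = 0.04   # margin of error quoted for the full sample

# Margin of error scales as 1/sqrt(n): a 10x smaller subgroup
# has a sqrt(10) ~ 3.16x larger margin of error.
moe_sub = moe_full * math.sqrt(n_full / n_sub)
print(f"subgroup margin of error: about ±{moe_sub:.1%}")   # ±12.6%, i.e. ~13%
```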
There are also many other measurement problems in population surveys.
The problems mentioned above apply to all statistical experiments, not just population surveys.
Further information: Opinion poll, Statistical survey

False causality
Correlation does not imply causation
When a statistical test shows a correlation between A and B, there are usually five possibilities:
A causes B.
B causes A.
A and B both partly cause each other.
A and B are both caused by a third factor, C.
The observed correlation was due purely to chance.
The fifth possibility can be quantified by statistical tests that can calculate the probability that the correlation observed would be as large as it is just by chance if, in fact, there is no relationship between the variables. However, even if that possibility has a small probability, there are still the four others.
If the number of people buying ice cream at the beach is statistically related to the number of people who drown at the beach, then nobody would claim ice cream causes drowning because it's obvious that it isn't so. (In this case, both drowning and ice cream buying are clearly related by a third factor: the number of people at the beach).
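A small simulation makes the third-factor effect visible (all rates below are made up purely for illustration): two variables that never influence each other still show a strong correlation because both are driven by beach attendance.

```python
import numpy as np

rng = np.random.default_rng(2)
days = 365

# C: number of people at the beach each day (the hidden third factor).
beachgoers = rng.poisson(200, days) * rng.uniform(0.2, 1.8, days)

# A and B are each driven by C plus independent noise; neither causes the other.
ice_creams = 0.30 * beachgoers + rng.normal(0, 5, days)
drownings = 0.001 * beachgoers + rng.normal(0, 0.05, days)

r = np.corrcoef(ice_creams, drownings)[0, 1]
print(f"correlation(ice cream, drownings) = {r:.2f}")  # clearly positive
```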
This fallacy can be used, for example, to prove that exposure to a chemical causes cancer. Replace "number of people buying ice cream" with "number of people exposed to chemical X", and "number of people who drown" with "number of people who get cancer", and many people will believe you. In such a situation, there may be a statistical correlation even if there is no real effect. For example, if there is a perception that a chemical site is "dangerous" (even if it really isn't), property values in the area will decrease, which will entice more low-income families to move to that area. If low-income families are more likely to get cancer than high-income families (this can happen for many reasons, such as a poorer diet or less access to medical care) then rates of cancer will go up, even though the chemical itself is not dangerous. It is believed[4] that this is exactly what happened with some of the early studies showing a link between EMF (electromagnetic fields) from power lines and cancer.[5]
In well-designed studies, the effect of false causality can be eliminated by assigning some people into a "treatment group" and some people into a "control group" at random, giving the treatment group the treatment and not giving it to the control group. In the above example, a researcher might expose one group of people to chemical X and leave a second group unexposed. If the first group had higher cancer rates, the researcher knows that there is no third factor that affected whether a person was exposed, because he controlled who was exposed or not and assigned people to the exposed and non-exposed groups at random. However, in many applications, actually doing an experiment in this way is either prohibitively expensive, infeasible, unethical, illegal, or downright impossible. For example, it is highly unlikely that an IRB would accept an experiment that involved intentionally exposing people to a dangerous substance in order to test its toxicity. The obvious ethical implications of such experiments limit researchers' ability to empirically test causation.

Proof of the null hypothesis
In a statistical test, the null hypothesis (H0) is considered valid until enough data proves it wrong. Then H0 is rejected and the alternative hypothesis (HA) is considered to be proven correct. By chance this can happen even though H0 is true, with a probability denoted alpha, the significance level. This can be compared to the judicial process, where the accused is considered innocent (H0) until proven guilty (HA) beyond reasonable doubt (alpha).
But if data does not give us enough proof to reject H0, this does not automatically prove that H0 is correct. If, for example, a tobacco producer wishes to demonstrate that its products are safe, it can easily conduct a test with a small sample of smokers versus a small sample of non-smokers. It is unlikely that any of them will develop lung cancer (and even if they do, the difference between the groups has to be very big in order to reject H0). Therefore it is likely—even when smoking is dangerous—that our test will not reject H0. If H0 is accepted, it does not automatically follow that smoking is proven harmless. The test has insufficient power to reject H0, so the test is useless and the value of the "proof" of H0 is also null.
This can—using the judicial analogue above—be compared with the truly guilty defendant who is released just because the proof is not enough for a verdict. This does not prove the defendant's innocence, but only that there is not proof enough for a verdict. In other words, "absence of evidence" does not imply "evidence of absence".
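The tobacco example can be made concrete with a toy power simulation. The disease rates and the crude decision rule below are invented for illustration; the point is only that a small, underpowered study almost never rejects H0 even when the effect is real:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical rates: smoking really does raise the risk,
# but the disease is rare over a short study window.
p_smoker, p_nonsmoker = 0.010, 0.002
n_per_group = 50          # deliberately tiny samples
trials = 10_000

rejections = 0
for _ in range(trials):
    cases_smoker = rng.binomial(n_per_group, p_smoker)
    cases_nonsmoker = rng.binomial(n_per_group, p_nonsmoker)
    # Crude rule: reject H0 only if smokers show at least 3 more cases.
    if cases_smoker - cases_nonsmoker >= 3:
        rejections += 1

print(f"power: about {rejections / trials:.1%}")
# Roughly 1%: the study almost never detects the real effect,
# so failing to reject H0 proves nothing about safety.
```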

Data dredging
Data dredging is an abuse of data mining. In data dredging, large compilations of data are examined in order to find a correlation, without any pre-defined choice of a hypothesis to be tested. Since the required confidence level to establish a relationship between two parameters is usually chosen to be 95% (meaning that there is a 95% chance that the relationship observed is not due to random chance), there is thus a 5% chance of finding a correlation between any two sets of completely random variables. Given that data dredging efforts typically examine large datasets with many variables, and hence even larger numbers of pairs of variables, spurious but apparently statistically significant results are almost certain to be found by any such study.
Note that data dredging is a valid way of finding a possible hypothesis but that hypothesis must then be tested with data not used in the original dredging. The misuse comes in when that hypothesis is stated as fact without further validation.
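A quick sketch of what dredging produces on pure noise: 30 independent random variables give 435 pairs, and roughly 5% of them will look "significant" at p < 0.05.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n_vars, n_obs = 30, 100
data = rng.normal(size=(n_vars, n_obs))   # 30 mutually independent variables

spurious = sum(
    1
    for i in range(n_vars)
    for j in range(i + 1, n_vars)
    if pearsonr(data[i], data[j])[1] < 0.05   # p-value below 0.05
)
n_pairs = n_vars * (n_vars - 1) // 2          # 435 pairs
print(f"{spurious} of {n_pairs} random pairs look 'significant'")  # ~22 expected
```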

Data manipulation
Informally called "fudging the data," this practice includes selective reporting (see also publication bias) and even simply making up false data.
Examples of selective reporting abound. The easiest and most common examples involve choosing a group of results that follow a pattern consistent with the preferred hypothesis while ignoring other results or "data runs" that contradict the hypothesis.
Psychic researchers have long disputed studies showing people with ESP ability. Critics accuse ESP proponents of only publishing experiments with positive results and shelving those that show negative results. A "positive result" is a test run (or data run) in which the subject guesses a hidden card, etc., at a much higher frequency than random chance.
The deception involved in both cases is that the hypothesis is not confirmed by the totality of the experiments - only by a tiny, selected group of "successful" tests.
Scientists, in general, question the validity of study results that cannot be reproduced by other investigators. However, some scientists refuse to publish their data and methods.[6]

Other fallacies
N = 1 fallacy
Also, the post facto fallacy assumes that an event for which a future likelihood can be measured had the same likelihood of happening once it has already occurred. Thus, if someone had already tossed 9 coins and each has come up heads, people tend to assume that the likelihood of the tenth toss also being heads is 1023 to 1 against (which was the likelihood of ten heads before the first coin was tossed), when in fact the chance of the tenth head is even, 1 to 1. This error has led, in the UK, to the false imprisonment of women for murder when the courts were given the prior statistical likelihood of a woman's 3 children dying from Sudden Infant Death Syndrome as being the chances that their already dead children died from the syndrome. This led to statements from Roy Meadow that the chances they had died of Sudden Infant Death Syndrome were millions to one against; convictions were then handed down in spite of the statistical inevitability that a few women would suffer this tragedy. Meadow was subsequently struck off the U.K. Medical Register for giving "erroneous" and "misleading" evidence, although this was later reversed by the courts.
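The coin-toss arithmetic is easy to verify by simulation: a priori, ten heads in a row is a 1-in-1024 event (1023 to 1 against), but conditional on nine heads having already occurred, the tenth toss is still even odds.

```python
import numpy as np

rng = np.random.default_rng(5)
flips = rng.integers(0, 2, size=(2_000_000, 10), dtype=np.int8)  # 1 = heads

nine_heads = flips[:, :9].sum(axis=1) == 9   # first nine tosses all heads

# A priori, all ten heads is a 1-in-1024 event...
print(f"P(10 heads a priori)   = {(flips.sum(axis=1) == 10).mean():.5f}")  # ~0.00098
# ...but once nine heads have happened, the tenth toss is still 50/50.
print(f"P(10th head | 9 heads) = {flips[nine_heads, 9].mean():.2f}")       # ~0.50
```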


Originally Posted by Whathefish
Whether you like it or not, professionals use stats in many fields, engineering included. If X amount of part A in an engine production line fails often, it just means it has a higher probability of failing; it's irrelevant whether you as a car buyer kenna the unlucky batch or not, the stats stay.

Good schools in SG produce higher stats of good students, regardless of whether your child belongs to the successful or unsuccessful tier; it's just that more end up successful.

Stats in its simplest form = probability. Tell yourself you do not believe the probability of heads vs tails is 50 percent. Whether or not you as an individual get a series of heads cannot change the probability as a whole.

Without stats, many of us in certain professions would have no need to work already.



 

Analytical Professor

Alfrescian
Loyal
Russia has really progressed under Putin.

It still is the largest country in the world.

Wow, Russia has progressed. I remember back in the 90s, my Russian customer told me they would rather put their money (USD, not roubles) under the pillow than in the bank! I was told their banks couldn't be trusted!
 

avalon74

Alfrescian
Loyal
Hi Avalon74,

Yes, nothing lasts forever. You are right, waterproofing is supposed to last a minimum of 3 years. However, how many people really do waterproofing the right way? Waterproofing is only one part of the equation for a non-leaky toilet roof; there is much more involved. It all begins from the start. Do it right the first time and never have to come back again. Under the hot sun and under the rain, humans tend to operate differently. They will think one tile only, never mind, and this is where the problem starts. When I was building roofs, with one small mistake we had to change the whole tile. We cannot ali baba. Even for the simple roofs that we built, we made sure every joint was connected to the next; if it couldn't be, we had to modify it and make sure they overlapped nicely. It's really hard when you are inexperienced, but when you have an old hand with you, there's nothing to worry about.

Look around all the new sites/developments: mainly young Indonesians. You don't see one old, one young.

Was your sis's condo the highest floor, or was there someone above her? I did my own waterproofing as I didn't trust contractors; I didn't want my neighbour to suffer if anything went wrong. The skilled worker did the normal stuff and I double-checked every single corner before they laid the cementing works. Once there is a problem, there is no ending.
Hi BFF,
My sis's unit is mid-level in the condo; the waterproofing is part of the standard package deal. When the leak happened, she got her upstairs neighbour & the maintenance crew involved, as upstairs needed to redo the waterproofing.
 

Analytical Professor

Alfrescian
Loyal
Actually not.... Veg won't suffice...

Cathy, get the Digi one, RM15 per week; good enough for browsing, minus video streaming.

Now P1 is also in Setia Tropika.

Here, here... I don't have an internet connection in JB, remember?

Are you sure that vege is all you need? LOL!

My house is not ready yet, so very malu to invite people to my house lah.... will need another few months to fix it all up... my hubby has been working on it all weekends and holidays liao... still hammering away! :wink:
 

Whathefish

Alfrescian
Loyal
Ouch ...... Stats...... What shall I say?

I'd still rather stick with Evan Esar.


A misuse of statistics occurs when a statistical argument asserts a falsehood. In some cases, the misuse may be accidental. In others, it is purposeful and for the gain of the perpetrator. When the statistical reason involved is false or misapplied, this constitutes a statistical fallacy.

The false statistics trap can be quite damaging to the quest for knowledge. For example, in medical science, correcting a falsehood may take decades and cost lives.

Misuses can be easy to fall into. Professional scientists, even mathematicians and professional statisticians, can be fooled by quite simple methods, however careful they are to check everything. Scientists have been known to fool themselves with statistics through lack of knowledge of probability theory and lack of standardization of their tests.

......
Bro, thank you for pasting the huge wall of text to guide us on how to use stats properly and prevent erroneous conclusions. But like I've mentioned, poor parameters and wrong usage or interpretation are the problems with lots of poorly concluded stats; it doesn't mean that the stats themselves were wrong. But can you not understand that stats > coffeeshop talk? What is the best replacement other than stats and the law of large numbers? Does an engine factory change its specs because you and your group of kakis said it's bad? Or does it take a large sample to come up with a proper conclusion? Does the insurance company revise its premiums because you and your kakis think they're too expensive, rather than basing them on actuaries' projections from large numbers?

You might not like stats, but they're everywhere in our lives, useful and relevant, in many sectors of society, without which we'd just be soothsayers/gossipers whose conclusions lie within our own sphere of influence.
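For the coin example, a five-line law-of-large-numbers sketch shows why individual streaks don't move the aggregate:

```python
import numpy as np

rng = np.random.default_rng(6)

# Any individual can see a streak of heads, but the overall
# frequency settles towards 50% as the sample grows.
for n in (10, 100, 10_000, 1_000_000):
    print(f"n = {n:>9,}: share of heads = {rng.integers(0, 2, n).mean():.4f}")
```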
 

Analytical Professor

Alfrescian
Loyal
Aiya..... This iPad is giving me a headache....

Can't bold.... Can't highlight.....

Nothing beats using a desktop....

My statistics post.... I spent so much time but failed to edit it....

So I give up...
 

Analytical Professor

Alfrescian
Loyal
Evan Esar may be a humorist.... and we may wanna laugh at his jokes, and at him literally..

But what about you? Me? We are nowhere close..... Probably, even without being humorists, the world is laughing at us in our offices, our neighbourhood, in this forum and amongst our many relatives....

I doubt you or I can deny that..

Stats, anyone?

Also, I personally beg to differ on the trustworthiness of Evan Esar, a humorist? To each his own, cheers.
 

avalon74

Alfrescian
Loyal
Hi Bro,

I so envy you that you can move in before the 7th month. For me, I have to drag on another month and move in in Sept. Arranging delivery is also a headache, during Hari Raya & Malaysia's National Day. By then it should be mid-Sept. Cheers!
Bro,
So where is your new place?
Depends on whether you are superstitious or not.. most of the deliveries are indeed impacted by the Hari Raya festive period..
Full payments for renovation & furniture.. awaiting the 3+2 sofa set soon..
A 3-day stay in SEG.. glad to say, no mosquitoes during the evenings..
Fixed up the antenna (cheapo type) but can only receive the Malaysian channels..
For internet connections, checked with the residents' club & they say the signal is weak, so only TM for now.. (may need to ask the various mobile operators to confirm)
 

avalon74

Alfrescian
Loyal
The Setia office arranges for them to come down for collection yearly. You can time your payment till their next visit.
The problem I had is with the bills being in Malay.. argh..
Paid MPJBT through internet banking.. also set up for JAS & TN to ease the payment issues..
However, I only did the quit rent this year through TRC; any idea how to pay next time? Heard there are no invoices for quit rent..
 

Analytical Professor

Alfrescian
Loyal
Some of them......

Facts are stubborn, but statistics are more pliable. Mark Twain

Definition of Statistics: The science of producing unreliable facts from reliable figures. Evan Esar

"there are three kinds of lies: lies, damned lies and statistics"
Disraeli

"the only statistics you can trust are those you falsified yourself"
Winston Churchill

"a little inaccuracy saves a world of explanation"
Clarence Edwin Ayres

If all the statisticians in the world were laid head to toe, they wouldn't be able to reach a conclusion" Anon., after comment on economists by G. B. Shaw





 