Search Results

Search found 2867 results on 115 pages for 'frequency distribution'.


  • Fedora distribution update pop-up after fresh installation

    - by Sayan Ghosh
    Hi, we do a kickstart installation of FC-10 at our site. I am quite irritated by the distribution-update pop-up that appears after the OS installation. I want a keyword to put into the kickstart file that would stop Fedora from prompting with an update pop-up. Is it possible to include such a switch in the kickstart, or a script that could be added to the %post section? Thanks, Sayan
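    On FC-10 the pop-up typically comes from gnome-packagekit's update icon rather than from anything a kickstart keyword controls directly. A minimal %post sketch of a workaround, assuming the notifier is gpk-update-icon and that its autostart file lives at the usual path (both assumptions worth verifying on an installed system):

        %post
        # Assumption: the update pop-up comes from gpk-update-icon (gnome-packagekit).
        # Disabling its autostart entry suppresses the notification for all users.
        if [ -f /etc/xdg/autostart/gpk-update-icon.desktop ]; then
            echo "X-GNOME-Autostart-enabled=false" >> /etc/xdg/autostart/gpk-update-icon.desktop
        fi
        %end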


  • iPhone Icon file in bundle blank when building for App store distribution

    - by Boiler Bill
    I have been spinning my wheels for a couple of hours on why, when I build my app with my distribution cert with the device as the target, the Icon.png file in the bundle is empty. If I build with my developer cert or against the simulator, the Icon.png in the bundle matches the one in my project file. I have verified that my Icon.png is 57x57, has no alpha channel, and has had extra Finder attributes removed. I even took one of the Icon.png files from my first application, which is in the store today, and it didn't work either. Here is the output from the build results:

        CopyPNGFile build/Distribution-iphoneos/myApp.app/Icon.png Icon.png
            cd /Users/wrbarbour/projects/myAppTWO/myApp
            setenv COPY_COMMAND /Developer/Library/PrivateFrameworks/DevToolsCore.framework/Resources/pbxcp
            setenv PATH "/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
            "/Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Plug-ins/iPhoneOS Build System Support.xcplugin/Contents/Resources/copypng" -compress "" /Users/wrbarbour/projects/myAppTWO/myApp/Icon.png /Users/wrbarbour/projects/myAppTWO/myApp/build/Distribution-iphoneos/myApp.app/Icon.png

    Can someone get me pointed in the right direction?


  • Random number within a range based on a normal distribution

    - by ConfusedAgain
    I'm math challenged today :( I want to generate random numbers within a range (n to m, e.g. 100 to 150), but instead of being purely random I want the results to be based on the normal distribution. By this I mean that in general I want the numbers "clustered" around 125. I've found this random number package that seems to have a lot of what I need: http://beta.codeproject.com/KB/recipes/Random.aspx It supports a variety of random generators (including the Mersenne Twister) and can apply a generator to a distribution. But I'm confused... if I use a normal distribution generator, the random numbers range from roughly -6 to +8 (apparently the true range is float.min to float.max). How do I scale that to my required range?
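    One common approach, independent of that package: centre a normal distribution on the midpoint, pick sigma so that the range spans about six standard deviations, and redraw the rare sample that falls outside. A minimal Python sketch of the idea (the linked C# package is not used here):

        import random

        def bounded_normal(low, high, rng=random):
            """Normal sample clustered at the midpoint, redrawn if outside [low, high]."""
            mean = (low + high) / 2.0
            sigma = (high - low) / 6.0   # +/- 3 sigma spans the range, so redraws are rare
            while True:
                x = rng.gauss(mean, sigma)
                if low <= x <= high:
                    return x

        print([round(bounded_normal(100, 150)) for _ in range(10)])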


  • Selecting item from set given distribution

    - by JH
    I have a set of X items such as {blower, mower, stove}, and each item has a certain percentage of times it should be selected from the overall set {blower=25%, mower=25%, stove=75%}, along with a certain distribution that these items should follow (blower should be selected more at the beginning of selection and stove more at the end). We are given a number of objects to be selected overall (e.g. 100) and an overall time to do this in (say 100 seconds). I was thinking of using a roulette wheel algorithm where the weights on the wheel are affected by the current distribution as a function of the elapsed time (and the allowed duration), so that simple functions could be used to determine the weights. Are there any common approaches to problems like this that anyone is aware of? Currently I have programmed something similar to this in Java, using functions such as x^2 (with correct normalization for the weights) to ensure that a good distribution occurs. Other suggestions or common practices would be welcome :-)
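    For reference, a minimal Python sketch of the time-weighted roulette wheel described above (the weight functions here are illustrative assumptions, not the asker's Java code): blower is favoured early, stove late, mower kept flat.

        import random

        def pick(items, elapsed, duration, rng=random):
            """Roulette-wheel selection with weights that vary over elapsed time."""
            t = elapsed / float(duration)            # normalised time in [0, 1]
            weights = [weight_fn(t) for _, weight_fn in items]
            r = rng.uniform(0, sum(weights))
            for (name, _), w in zip(items, weights):
                r -= w
                if r <= 0:
                    return name
            return items[-1][0]                      # guard against rounding error

        items = [("blower", lambda t: (1 - t) ** 2),
                 ("mower",  lambda t: 1.0),
                 ("stove",  lambda t: t ** 2)]
        print([pick(items, s, 100) for s in range(0, 100, 10)])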


  • R: Given a set of random numbers drawn from a continuous univariate distribution, find the distribution

    - by knorv
    Given a set of real numbers drawn from an unknown continuous univariate distribution (let's say it is one of beta, Cauchy, chi-square, exponential, F, gamma, Laplace, log-normal, normal, Pareto, Student's t, uniform and Weibull)...

        x <- c(15.771062, 14.741310,  9.081269, 11.276436, 11.534672, 17.980860, 13.550017, 13.853336, 11.262280, 11.049087,
               14.752701,  4.481159, 11.680758, 11.451909, 10.001488, 11.106817,  7.999088, 10.591574,  8.141551, 12.401899,
               11.215275, 13.358770,  8.388508, 11.875838,  3.137448,  8.675275, 17.381322, 12.362328, 10.987731,  7.600881,
               14.360674,  5.443649, 16.024247, 11.247233,  9.549301,  9.709091, 13.642511, 10.892652, 11.760685, 11.717966,
               11.373979, 10.543105, 10.230631,  9.918293, 10.565087,  8.891209, 10.021141,  9.152660, 10.384917,  8.739189,
                5.554605,  8.575793, 12.016232, 10.862214,  4.938752, 14.046626,  5.279255, 11.907347,  8.621476,  7.933702,
               10.799049,  8.567466,  9.914821,  7.483575, 11.098477,  8.033768, 10.954300,  8.031797, 14.288100,  9.813787,
                5.883826,  7.829455,  9.462013,  9.176897, 10.153627,  4.922607,  6.818439,  9.480758,  8.166601, 12.017158,
               13.279630, 14.464876, 13.319124, 12.331335,  3.194438,  9.866487, 11.337083,  8.958164,  8.241395,  4.289313,
                5.508243,  4.737891,  7.577698,  9.626720, 16.558392, 10.309173, 11.740863,  8.761573,  7.099866, 10.032640)

    ... is there some easy way in R to programmatically and automatically find the most likely distribution and the estimated distribution parameters?
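    One minimal route, sketched with MASS::fitdistr: fit each candidate family by maximum likelihood and compare AIC. The candidate list below is restricted to families fitdistr can fit without starting values (an assumption; beta, chi-square, F, Laplace and Pareto would need start values or another package), and fits that error out are skipped.

        library(MASS)

        candidates <- c("normal", "log-normal", "gamma", "weibull", "cauchy", "exponential")
        fits <- lapply(candidates,
                       function(d) tryCatch(fitdistr(x, d), error = function(e) NULL))
        names(fits) <- candidates
        aics <- sapply(fits, function(f) if (is.null(f)) NA else AIC(f))
        print(aics)                  # smaller is better
        fits[[which.min(aics)]]      # parameter estimates of the best-scoring family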


  • Why have we got so many Linux distributions? [closed]

    - by nebukadnezzar
    Pointed there from an answer to another question, I came across this graphic, and I'm shocked at how many Linux distributions currently exist. However, it seems that most of these distributions are forks of already popular distributions with minimal changes, usually limited to themes, wallpapers and buttons. It would still seem easier to create a sub-distribution with the required changes, such as Xubuntu with XFCE4, Kubuntu with KDE4, Fluxbuntu with Fluxbox, etc. To my mind there are a number of problems with having so many distributions - perhaps less security/stability due to the smaller groups of developers, and also a confusingly vast range of choice for newcomers to Linux. Some reasons that developers might decide to fork are: specializing in a particular topic (a work-related topic, e.g. for a hospital); an exceptional architecture that requires a special set of software; use of non-FOSS, proprietary technology; and so on. So what other reasons are there that have caused so many people to decide to create their own distributions? What are the thought processes that have led to this? And are these "valid" reasons - do we need so many distributions? If you can back your experiences up with references, that would be great.


  • Measuring the CPU frequency scaling effect

    - by Bryan Fok
    Recently I have been trying to measure the effect of CPU frequency scaling. Is it accurate to use this clock to measure it?

        template<std::intmax_t clock_freq>
        struct rdtsc_clock {
            typedef unsigned long long rep;
            typedef std::ratio<1, clock_freq> period;
            typedef std::chrono::duration<rep, period> duration;
            typedef std::chrono::time_point<rdtsc_clock> time_point;
            static const bool is_steady = true;

            static time_point now() noexcept
            {
                unsigned lo, hi;
                asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
                return time_point(duration(static_cast<rep>(hi) << 32 | lo));
            }
        };

    Update: According to the comments on another post of mine, I believe rdtsc cannot be used to measure the effect of CPU frequency scaling, because on many CPUs the counter read by rdtsc is not affected by the CPU frequency. Am I right?
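    That is the case on CPUs with an invariant TSC. A quick way to check a given Linux machine (a sketch; if either flag is reported, the TSC ticks at a constant rate regardless of frequency scaling and will not show the effect being measured):

        # Prints the flag(s) if the TSC is invariant; empty output means the
        # TSC may still vary with the core clock on this CPU.
        grep -m1 -o -E 'constant_tsc|nonstop_tsc' /proc/cpuinfo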


  • Sharepoint Server 2007 generates event log entry every 5 minutes - "The SSP Timer Job Distribution List Import Job was not run"

    - by Teevus
    I get the following error logged in the Event Log every 5 minutes:

        The SSP Timer Job Distribution List Import Job was not run. Reason: Logon failure: the user has not been granted the requested logon type at this computer

    In addition, OWSTimer.exe periodically gets into a state where it's consuming almost all the CPU, and only killing the process or restarting the Sharepoint services fixes it (although I'm not sure if this is a related or separate issue). I have tried the following (based on various suggestions floating around the web), all to no avail:

        - iisreset (no effect)
        - Added the Sharepoint and Sharepoint Search service accounts to the "Log on as a batch job" and "Log on as a service" policies in the Group Policies for the domain. I went into the Local Computer Policy on the Sharepoint server and verified that those policies had actually been applied.
        - Verified that the Sharepoint and Sharepoint Search service accounts are both in the WSS_WPG group.
        - Verified in dcomcnfg that the WSS_WPG group (and indeed the Sharepoint and Sharepoint Search service accounts) has local activation rights for SPSearch.

    Any more suggestions would be valued. Thanks


  • CentOS 6 - YUM Local Repo - Ensure consistent package distribution

    - by Mike Purcell
    I've read a few guides outlining how to set up a local YUM repo, but none of them explicitly answered my question: if I set up a local YUM repo, does that mean that any CentOS server which pulls from said repo can never be "ahead" of the local YUM repo? I want to ensure a consistent package distribution across all my servers. Right now, when I do a yum update, even on a daily basis, the servers can fall out of alignment. For example, if I run yum update on my dev server in the morning, then run yum update on one of my production servers in the afternoon, the production server may pick up a new version of a package that the dev server did not, due to the time window between the update commands. Rather, I'd prefer to run yum update on my dev server, which has access to the remote upstream yum repos, then let it sit for 2 weeks, after which I run yum update on my production servers against the local repo on my dev server.
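    That is how it works: clients can only ever see what the local repo's metadata advertises, so they cannot get ahead of it. A minimal sketch of the usual CentOS 6 setup with reposync/createrepo from yum-utils (repo ids, paths and the URL are hypothetical):

        # On the dev server: mirror the upstream packages, then (re)build the
        # repo metadata. Re-run this only when production should move forward.
        reposync --repoid=base --repoid=updates -p /var/www/html/localrepo
        createrepo /var/www/html/localrepo

        # /etc/yum.repos.d/local.repo on each production server:
        #   [local]
        #   name=Local CentOS mirror
        #   baseurl=http://dev.example.com/localrepo
        #   enabled=1
        #   gpgcheck=1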


  • Exchange 2010 Cmdlets for Sent Items in a Distribution group

    - by Jimmy Jones
    Here is the scenario I am in, and I am stuck on how to do this the correct way. What I'm looking for is a syntax that will provide me with statistics on what users have emailed ("Sent" items) for the day. I would like to get information on what all members of a specific distribution group have emailed for the day. I have tried the following, to no avail:

        Get-Mailbox | Get-MailboxFolderStatistics -FolderScope SentItems | Where {$_.ItemsInFolder -gt 0} | -Start "06/14/2012 9:00AM" -End "06/14/2012 5:00PM" | Sort-Object -Property ItemsInFolder -Descending | select-object Identity,ItemsInFolder | export-csv c:\test.txt

        Get-MessageTrackingLog -Start "06/14/2012 9:00AM" -End "06/14/2012 5:00PM" -Sender "" | measure-object

    The second one will only work for specified users, but I need to check the whole group. If anyone could help me out. Thank you!!!
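    A sketch of one way to cover the whole group (not a tested one-liner): expand the group membership, then count SEND events per member in the message tracking log. "All Staff" is a hypothetical group name; run this from the Exchange Management Shell on a Hub Transport server.

        Get-DistributionGroupMember "All Staff" | ForEach-Object {
            $addr = $_.PrimarySmtpAddress.ToString()
            $count = (Get-MessageTrackingLog -EventId SEND -Sender $addr `
                          -Start "06/14/2012 9:00AM" -End "06/14/2012 5:00PM" `
                          -ResultSize Unlimited | Measure-Object).Count
            New-Object PSObject -Property @{Sender = $addr; Messages = $count}
        } | Sort-Object Messages -Descending | Export-Csv C:\sent.csv -NoTypeInformation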


  • Send mail from a distribution group's email address

    - by Campo
    A user has send permission on a distribution group on a Windows Server 2003 domain. I am the admin. When either of us sends email using the distribution group's email address, we get a non-delivery report:

        Your message did not reach some or all of the intended recipients.
            Subject: TEST
            Sent: 4/19/2010 4:46 PM
        The following recipient(s) cannot be reached:
            [email protected] on 4/19/2010 4:46 PM
            You do not have permission to send to this recipient. For assistance, contact your system administrator.
            MSEXCH:MSExchangeIS:/DC=local/DC=DOMAIN:SERVERNAME

    Thanks, JC
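    If the cause is a missing "Send As" grant on the group object itself, one command-line sketch using dsacls (the distinguished name and account below are placeholders, and the grant can equally be made from the Security tab in Active Directory Users and Computers):

        rem Grant DOMAIN\jsmith the "Send As" extended right on the group object.
        rem Replace the DN with the real distinguished name of the group.
        dsacls "CN=SalesGroup,OU=Groups,DC=DOMAIN,DC=local" /G "DOMAIN\jsmith:CA;Send As"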


  • Best Small Linux Distribution for rDesktop

    - by d2jxp
    What would be the best Linux distribution to use just for the purpose of rdesktop? We're trying to decide whether we should get rid of old computers or just use them as thin clients to connect to virtual Windows 7 machines on our network. I would like something with as little bloat as possible that can be run from a USB flash drive. I have tried SliTaz, ThinStation, and Pixil from Century Software. SliTaz has been my favorite so far, but I still want to know if there's something better that's also easily customizable.


  • Multiple WAPs: Bandwidth, Frequency Considerations

    - by Pete Cresswell
    The router in my LAN closet does 2.4 and 5 GHz. In the kitchen I have a single-band 2.4 GHz WAP, and in the garden shed I have another single-band 2.4 GHz WAP. All are set to Bandwidth = 40 MHz, Wireless Network Mode = N-Only. The kitchen WAP and the LAN closet router both come up with multiple bars on my smartphone from almost anywhere in the house. The garden shed WAP will register one bar... but only sometimes. The questions: Are these things in danger of butting heads? Should I re-set them to Bandwidth = 20 MHz? Bandwidth = Auto? Are there any tools that I could use on an Android smartphone, iPod, or WiFi-enabled laptop to make my own analysis?


  • Frequency of RAM

    - by Vignesh Palani
    I have a very old system, an HP Compaq dx2080, which had 1 GB of RAM. I recently bought an EVM DDR2 1 GB PC RAM module with a clock rate of 667 MHz. I dual-boot Windows 7 and 8. When I installed the module, Windows 7 was still using only the older 1 GB; it showed 2 GB available and 1 GB usable in System Properties. I searched around and found that I can change the maximum memory in msconfig, so I set it to 2048. Still, it was using only 1 GB. When I switched to Windows 8, it was using the full 2 GB. Now for my question: my system only supports 533 MHz and 667 MHz RAM, but in the BIOS I saw the new module showing as 800 MHz. I rechecked using Speccy and CPU-Z, and they showed different values from each other. The module is labeled as 667 MHz, no mistake in that. Am I missing something? And can I continue using it? My point again: there are only two slots. Please help.


  • MySQL : table organisation for very large sets with high update frequency

    - by Remiz
    I'm facing a dilemma in the choice of my MySQL schema. Before I start, here is an extremely simplified picture of my database: http://i43.tinypic.com/2wp5lxz.png In one sentence: for each customer, the application harvests text data and attaches tags to each piece of data collected. As an approximation of the usage of each table, here is what I expect:

        - customer: ~5000, shouldn't grow fast
        - data: 5 million per customer, could double or triple for big customers
        - tag: ~1000, fairly fixed size
        - data_tag: easily hundreds of millions per customer; each piece of data can be tagged many times

    The harvesting process is permanent, meaning that around every 15 minutes new data comes in and is tagged, which requires very constant index refreshing. A lot of my queries are a SELECT COUNT of DATA between specific DATES, tagged with a specific TAG, on a specific CUSTOMER (very rarely will they involve several customers). You can imagine that with this kind of data volume I'm facing a challenge in terms of data organisation and indexing. Again, this is a very minimalistic and simplified version of my structure. My question is, is it better to (a) stick with this model and manage crazy index optimization (which potentially means billions of rows in the data_tag table), or (b) change the schema and use one data table and one data_tag table per customer (which means having 5000+ tables in my database)? I'm running all of this on a MySQL 5.0 dedicated server (quad-core, 8 GB of RAM), replicated. I only use InnoDB; I also have another server that runs Sphinx. Knowing all of this, I can't wait to hear your opinions. Thanks.
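    For option (a), the count query is the thing to design for. A sketch with hypothetical column names, assuming the customer id and timestamp are denormalised onto data_tag (an assumption; in the pictured schema they presumably live on data): a composite index with the equality columns first and the range column last lets InnoDB answer the count from the index alone.

        -- Hypothetical columns; the point is the index column order:
        -- equality columns first (customer, tag), range column last (date).
        ALTER TABLE data_tag
            ADD INDEX ix_customer_tag_date (customer_id, tag_id, created_at);

        SELECT COUNT(*)
        FROM data_tag
        WHERE customer_id = 42
          AND tag_id = 7
          AND created_at BETWEEN '2010-05-01' AND '2010-05-31';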


  • Video streaming and internet browsing on different bands/frequency

    - by user47207
    I have a Netgear WDNR37000, which lets clients on the 2.4 GHz or 5 GHz band access the internet and see every client and device on the network. I have a computer with two NICs, one on the 2.4 GHz band and the other on the 5 GHz band. My specific problem is that I would like to serve my video streams (Hulu, PS3 Media Server, PlayOn) to my PS3 over the 5 GHz band while internet browsing is routed over the 2.4 GHz band, so that the video streams aren't affected by general internet use. While the easiest solution would be to disable internet access on the 5 GHz APN, I would like to know of a solution that does not require that.


  • Nginx load distribution and multi-domain SSL

    - by Steve Clark
    I'm researching the best methods for two new parts of our infrastructure, hopefully finding a single solution for both. 1) We're currently running a single application server, and we're going to be adding an additional application server and load balancing between the two. 2) We handle a few thousand domains across the application server(s), and we're looking to support SSL. The best method I've come across so far is using nginx for its load distribution, to serve the requests to the application servers, and for its SSL support. If a request uses SSL, nginx accepts the request, terminates SSL, and pipes it to Apache (the app servers). Now, that's all good, but I've yet to figure out how we can let nginx handle multiple domains using SSL. We're potentially looking at using UCC SSL certs, so we can support 150 domains on a single certificate, with each cert on a single IP. I'm new to all this (my experience is with physical load balancers and single domains on SSL), so any advice would be very much appreciated.
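    A minimal sketch of the nginx side (hypothetical names, IPs and paths): nginx terminates SSL and balances across the two Apache app servers. Serving many certificates on one IP generally needs either SNI support in the nginx build or a multi-domain/UCC cert as described above, since the certificate must be chosen before the Host header is readable.

        upstream app_servers {
            server 10.0.0.11:80;
            server 10.0.0.12:80;
        }

        server {
            listen 443 ssl;
            server_name example.com www.example.com;

            ssl_certificate     /etc/nginx/ssl/ucc-bundle.crt;
            ssl_certificate_key /etc/nginx/ssl/ucc-bundle.key;

            location / {
                # Pass the original host and client IP through to Apache.
                proxy_pass http://app_servers;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }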


  • Modeling distribution of performance measurements

    - by peterchen
    How would you mathematically model the distribution of repeated real-life performance measurements - "real life" meaning you are not just looping over the code in question, but it is a short snippet within a large application running in a typical user scenario? My experience shows that you usually have a peak around the average execution time that can be modeled adequately with a Gaussian distribution. In addition, there's a "long tail" containing outliers - often at a multiple of the average time. (The behavior is understandable considering the factors contributing to the first-execution penalty.) My goal is to model aggregate values that reasonably reflect this, and that can be calculated from aggregate values (as for the Gaussian, where mu and sigma are calculated from N, the sum of values, and the sum of squares). In other words, the number of repetitions is unlimited, but memory and calculation requirements should be minimized. A normal Gaussian distribution can't model the long tail appropriately, and its average will be biased strongly even by a very small percentage of outliers. I am looking for ideas, especially if this has been attempted/analysed before. I've checked various distribution models, and I think I could work something out, but my statistics is rusty and I might end up with an overblown solution. Oh, a complete shrink-wrapped solution would be fine, too ;) Other aspects / ideas:

        - Sometimes you get "two humps" distributions, which would be acceptable in my scenario with a single mu/sigma covering both, but ideally would be identified separately.
        - Extrapolating this, another approach would be a "floating probability density calculation" that uses only a limited buffer and adjusts automatically to the range (due to the long tail, bins may not be spaced evenly) - I haven't found anything, but with some assumptions about the distribution it should be possible in principle.

    Why (since it was asked): for a complex process we need to make guarantees such as "only 0.1% of runs exceed a limit of 3 seconds, and the average processing time is 2.8 seconds". The performance of an isolated piece of code can be very different from that in a normal run-time environment involving varying levels of disk and network access, background services, scheduled events that occur within a day, etc. This can be solved trivially by accumulating all the data. However, to accumulate this data in production, the data produced needs to be limited. For analysis of isolated pieces of code, a Gaussian deviation plus a first-run penalty is OK. That doesn't work any more for the distributions found above. [edit] I've already got very good answers (and finally - maybe - some time to work on this). I'm starting a bounty to look for more input / ideas.
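    One candidate worth noting, sketched in Python: model times as log-normal, whose two parameters can be estimated from streaming aggregates (n, sum of log t, sum of squared log t), mirroring the mu/sigma-from-aggregates idea above. This captures a long right tail better than a plain Gaussian, though it still cannot separate a second "hump".

        import math

        class LogNormalAggregate:
            """Streaming log-normal fit: constant memory, any number of samples."""
            def __init__(self):
                self.n = 0
                self.sum_log = 0.0
                self.sum_log_sq = 0.0

            def add(self, t):                 # t = one measured duration, t > 0
                x = math.log(t)
                self.n += 1
                self.sum_log += x
                self.sum_log_sq += x * x

            def params(self):                 # (mu, sigma) of log(t)
                mu = self.sum_log / self.n
                var = self.sum_log_sq / self.n - mu * mu
                return mu, math.sqrt(max(var, 0.0))

            def mean(self):                   # expected duration under the model
                mu, sigma = self.params()
                return math.exp(mu + sigma * sigma / 2.0)

        agg = LogNormalAggregate()
        for t in [2.7, 2.8, 2.9, 2.8, 9.5]:   # made-up timings with one outlier
            agg.add(t)
        print(agg.params(), agg.mean())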


  • iPhone application not running with Distribution Certificate

    - by MN
    I have a problem regarding running my application with a Distribution certificate. The application will work fine with a Development certificate on my iPod Touch. Once I change the Code Signing Identity to a Distribution certificate and specify an Entitlements file, the application crashes on launch. Is there anything I can do to correct this? Thanks in advance!
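    One frequent cause of this symptom in that SDK era (an assumption about this particular case): the Entitlements.plist used for the distribution build still contains get-task-allow = true, which is only valid for development signing. A minimal distribution entitlements file would look like:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
            "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <!-- Must be false for Ad Hoc / App Store distribution builds -->
            <key>get-task-allow</key>
            <false/>
        </dict>
        </plist>

    Checking the device console in Xcode's Organizer for the actual crash reason is also worthwhile.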

