Search Results

Search found 2867 results on 115 pages for 'frequency distribution'.


  • Dimension Reduction in Categorical Data with missing values

    - by user227290
    I have a regression model in which the dependent variable is continuous, but ninety percent of the independent variables are categorical (both ordered and unordered) and around thirty percent of the records have missing values (to make matters worse, they are missing randomly without any pattern; that is, more than forty five percent of the data have at least one missing value). There is no a priori theory to choose the specification of the model, so one of the key tasks is dimension reduction before running the regression. While I am aware of several methods for dimension reduction for continuous variables, I am not aware of a similar statistical literature for categorical data (except, perhaps, as part of correspondence analysis, which is basically a variation of principal component analysis on a frequency table). Let me also add that the dataset is of moderate size: 500,000 observations with 200 variables.
    I have two questions. Is there a good statistical reference out there for dimension reduction for categorical data, along with robust imputation (I think the first issue is imputation and then dimension reduction)? The second is linked to the implementation of the above problem. I have used R extensively earlier and tend to use the transcan and impute functions heavily for continuous variables, and use a variation of a tree method to impute categorical values. I have a working knowledge of Python, so if something nice is out there for this purpose then I will use it. Any implementation pointers in Python or R will be of great help. Thank you.
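    A sketch of one possible impute-then-reduce pipeline in Python, since the poster says Python is acceptable. The use of pandas/scikit-learn, the file name, and the choice of most-frequent imputation followed by TruncatedSVD on the one-hot matrix (an MCA-like step) are all illustrative assumptions, not something taken from the question.

        import pandas as pd
        from sklearn.impute import SimpleImputer
        from sklearn.preprocessing import OneHotEncoder
        from sklearn.decomposition import TruncatedSVD
        from sklearn.pipeline import make_pipeline

        df = pd.read_csv("data.csv")                              # hypothetical input file
        cat_cols = df.select_dtypes(include="object").columns     # the categorical predictors

        # Impute each categorical column with its modal level, one-hot encode,
        # then reduce the sparse indicator matrix to a handful of components.
        reducer = make_pipeline(
            SimpleImputer(strategy="most_frequent"),
            OneHotEncoder(handle_unknown="ignore"),
            TruncatedSVD(n_components=30, random_state=0),
        )
        X_reduced = reducer.fit_transform(df[cat_cols])
        print(X_reduced.shape)                                    # (n_observations, 30)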

    Read the article

  • CRM 2011 - Set/Retrieve work hours programmatically

    - by Philip Rich
    I am attempting to retrieve a resource's work hours to perform some logic I require. I understand that the CRM scheduling engine is a little clunky around such things, but I assumed that I would be able to find out how the working hours were stored in the DB eventually... So a resource has associated calendars, and those calendars have associated calendar rules and inner calendars, etc. It is possible to look at the start/end and frequency of the aforementioned calendar rules and query their codes to work out whether a resource is 'working' during a given period. However, I have not been able to find the actual working hours, the 9-5 shall we say, in any field in the DB. I even tried some SQL profiling while I was creating a new schedule for a resource via the UI, but the results don't show any work hours passing to SQL. For those with the patience, the intercepted SQL statement is below:

        EXEC Sp_executesql
            N'update [CalendarRuleBase] set [ModifiedBy]=@ModifiedBy0, [EffectiveIntervalEnd]=@EffectiveIntervalEnd0, [Description]=@Description0, [ModifiedOn]=@ModifiedOn0, [GroupDesignator]=@GroupDesignator0, [IsSelected]=@IsSelected0, [InnerCalendarId]=@InnerCalendarId0, [TimeZoneCode]=@TimeZoneCode0, [CalendarId]=@CalendarId0, [IsVaried]=@IsVaried0, [Rank]=@Rank0, [ModifiedOnBehalfBy]=NULL, [Duration]=@Duration0, [StartTime]=@StartTime0, [Pattern]=@Pattern0 where ([CalendarRuleId] = @CalendarRuleId0)',
            N'@ModifiedBy0 uniqueidentifier,@EffectiveIntervalEnd0 datetime,@Description0 ntext,@ModifiedOn0 datetime,@GroupDesignator0 ntext,@IsSelected0 bit,@InnerCalendarId0 uniqueidentifier,@TimeZoneCode0 int,@CalendarId0 uniqueidentifier,@IsVaried0 bit,@Rank0 int,@Duration0 int,@StartTime0 datetime,@Pattern0 ntext,@CalendarRuleId0 uniqueidentifier',
            @ModifiedBy0='EB04662A-5B38-E111-9889-00155D79A113',
            @EffectiveIntervalEnd0='2012-01-13 00:00:00',
            @Description0=N'Weekly Single Rule',
            @ModifiedOn0='2012-03-12 16:02:08',
            @GroupDesignator0=N'FC5769FC-4DE9-445d-8F4E-6E9869E60857',
            @IsSelected0=1,
            @InnerCalendarId0='3C806E79-7A49-4E8D-B97E-5ED26700EB14',
            @TimeZoneCode0=85,
            @CalendarId0='E48B1ABF-329F-425F-85DA-3FFCBB77F885',
            @IsVaried0=0,
            @Rank0=2,
            @Duration0=1440,
            @StartTime0='2000-01-01 00:00:00',
            @Pattern0=N'FREQ=WEEKLY;INTERVAL=1;BYDAY=SU,MO,TU,WE,TH,FR,SA',
            @CalendarRuleId0='0A00DFCF-7D0A-4EE3-91B3-DADFCC33781D'

    The key part of the statement is the setting of the pattern: @Pattern0=N'FREQ=WEEKLY;INTERVAL=1;BYDAY=SU,MO,TU,WE,TH,FR,SA'. However, as mentioned, there is no indication of the work hours being set. Am I thinking about this incorrectly, or is CRM doing something interesting around these work hours? Any thoughts greatly appreciated, thanks.

    Read the article

  • Faster quadrature decoder loops with Python code

    - by Kelei
    I'm working with a BeagleBone Black and using Adafruit's IO Python library. I wrote a simple quadrature decoding function and it works perfectly fine when the motor runs at about 1800 RPM. But when the motor runs at higher speeds, the code starts missing some of the interrupts and the encoder counts start to accumulate errors. Do you guys have any suggestions as to how I can make the code more efficient, or are there functions which can cycle the interrupts at a higher frequency? Thanks, Kel. Here's the code:

        # Define encoder count function
        def encodercount(term):
            global counts
            global Encoder_A
            global Encoder_A_old
            global Encoder_B
            global Encoder_B_old
            global error

            Encoder_A = GPIO.input('P8_7')   # stores the value of the encoders at time of interrupt
            Encoder_B = GPIO.input('P8_8')

            if Encoder_A == Encoder_A_old and Encoder_B == Encoder_B_old:   # this will be an error
                error += 1
                print 'Error count is %s' %error
            elif (Encoder_A == 1 and Encoder_B_old == 0) or (Encoder_A == 0 and Encoder_B_old == 1):
                # this will be clockwise rotation
                counts += 1
                print 'Encoder count is %s' %counts
                print 'AB is %s %s' % (Encoder_A, Encoder_B)
            elif (Encoder_A == 1 and Encoder_B_old == 1) or (Encoder_A == 0 and Encoder_B_old == 0):
                # this will be counter-clockwise rotation
                counts -= 1
                print 'Encoder count is %s' %counts
                print 'AB is %s %s' % (Encoder_A, Encoder_B)
            else:   # this will be an error as well
                error += 1
                print 'Error count is %s' %error

            Encoder_A_old = Encoder_A   # store the current encoder values as old values
            Encoder_B_old = Encoder_B   # to be used as comparison in the next loop

        # Initialize the interrupts - these trigger on both the rising and falling edges
        GPIO.add_event_detect('P8_7', GPIO.BOTH, callback = encodercount)   # Encoder A
        GPIO.add_event_detect('P8_8', GPIO.BOTH, callback = encodercount)   # Encoder B

        # This is the part of the code which runs normally in the background
        while True:
            time.sleep(1)
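    One observation on the callback above: it does console I/O (several print statements) inside the interrupt handler, which is expensive relative to the counting itself. A common alternative is a table-driven state machine with a single, minimal handler. The sketch below keeps the question's pin names but is otherwise an illustrative assumption, not a tested drop-in replacement for the Adafruit callback setup.

        # Transition table for 4x quadrature decoding: key is (old_state << 2) | new_state,
        # where a state is (A << 1) | B. Keys not present mean "no change" or a skipped step.
        DELTA = {
            0b0001: +1, 0b0111: +1, 0b1110: +1, 0b1000: +1,   # forward Gray-code steps
            0b0010: -1, 0b1011: -1, 0b1101: -1, 0b0100: -1,   # reverse Gray-code steps
        }

        state = 0      # last (A << 1) | B
        counts = 0
        errors = 0

        def encoder_isr(channel):
            # One handler for both pins: two GPIO reads, one dict lookup, no printing.
            global state, counts, errors
            new = (GPIO.input('P8_7') << 1) | GPIO.input('P8_8')
            step = DELTA.get((state << 2) | new)
            if step is None:
                errors += 1          # repeated or skipped edge
            else:
                counts += step
            state = new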

    Read the article

  • Best (Java) book for understanding 'under the bonnet' for programming?

    - by Ben
    What would you say is the best book to buy to understand exactly how programming works under the hood in order to increase performance? I've coded in assembly at university, I studied computer architecture and I obviously did high-level programming, but what I really don't understand is things like:
    - What is happening when I perform a cast?
    - What's the difference in performance if I declare something global as opposed to local?
    - How does the memory layout for an ArrayList compare with a Vector or LinkedList?
    - What's the overhead with pointers?
    - Are locks more efficient than using synchronized?
    - Would creating my own array using int[] be faster than using ArrayList?
    - Advantages/disadvantages of declaring a variable volatile
    I have got a copy of Java Performance Tuning, but it doesn't go down very low and it contains rather obvious things, like suggesting a HashMap instead of an ArrayList as you can map the keys to memory addresses, etc. I want something a bit more computer-science-y, linking the programming language to what happens with the assembler/hardware. The reason I'm asking is that I have an interview coming up for a job in High Frequency Trading and everything has to be as efficient as possible, yet I can't remember every single possible efficiency saving, so I'd just like to learn the fundamentals. Thanks in advance.

    Read the article

  • How to get timestamp of tick precision in .NET / C#?

    - by Hermann
    Up until now I used DateTime.Now for getting timestamps, but I noticed that if you print DateTime.Now in a loop you will see that it increments in discrete jumps of approx. 15 ms. But for certain scenarios in my application I need to get the most accurate timestamp possible, preferably with tick (= 100 ns) precision. Any ideas?
    Update: Apparently, Stopwatch / QueryPerformanceCounter is the way to go, but it can only be used to measure time, so I was thinking about calling DateTime.Now when the application starts up, then having a Stopwatch run and adding the elapsed time from the Stopwatch to the initial value returned from DateTime.Now. At least that should give me accurate relative timestamps, right? What do you think about that (hack)?
    NOTE: Stopwatch.ElapsedTicks is different from Stopwatch.Elapsed.Ticks! I used the former assuming 1 tick = 100 ns, but in this case 1 tick = 1 / Stopwatch.Frequency. So to get ticks equivalent to DateTime, use Stopwatch.Elapsed.Ticks. I just learned this the hard way.
    NOTE 2: Using the Stopwatch approach, I noticed it gets out of sync with the real time. After about 10 hours, it was ahead by 5 seconds. So I guess one would have to resync it every X, where X could be 1 hour, 30 min, 15 min, etc. I am not sure what the optimal timespan for resyncing would be, since every resync will change the offset, which can be up to 20 ms.
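    The question is about .NET, but the anchor-plus-monotonic-counter idea it describes is language-neutral; here is a minimal Python sketch of the same scheme, with the 15-minute resync interval purely an assumption.

        import time

        class HybridClock(object):
            """Wall-clock anchor plus a high-resolution monotonic counter, resynced
            periodically -- the DateTime.Now + Stopwatch combination described above."""

            def __init__(self, resync_interval=900.0):        # resync every 15 minutes (assumption)
                self.resync_interval = resync_interval
                self._resync()

            def _resync(self):
                self._anchor_wall = time.time()               # coarse absolute time
                self._anchor_mono = time.perf_counter()       # fine-grained, monotonic

            def now(self):
                elapsed = time.perf_counter() - self._anchor_mono
                if elapsed > self.resync_interval:
                    self._resync()                            # accept a small jump at each resync
                    elapsed = 0.0
                return self._anchor_wall + elapsed            # absolute time, fine-grained between resyncs

        clock = HybridClock()
        print(clock.now())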

    Read the article

  • Connecting grouped dots/points on a scatter plot based on distance

    - by ToNoY
    I have 2 sets of depth point measurements, for example:

        > a
          depth value
        1     2     2
        2     4     3
        3     6     4
        4     8     5
        5    16    40
        6    18    45
        7    20    58
        > b
          depth value
        1    10    10
        2    12    20
        3    14    35

    I want to show both groups in one figure, plotted against depth and with different symbols, as you can see here:

        plot(a$value, a$depth, type='b', col='green', pch=15)
        points(b$value, b$depth, type='b', col='red', pch=14)

    The plot seems okay, but the annoying part is that the green symbols are all connected (though I want connected lines too). I want a connection only where one group has continued data points at a 2 m interval, i.e. the symbols should be connected with a line from 2 to 8 m (green), then group B symbols should be connected from 10-14 m (red), and again group A symbols should be connected, which means I do NOT want to see a connection between the 8 m sample and the 16 m sample for group A. An easy solution may be dividing group A into two parts (say, A-shallow and A-deep) and then plotting A-shallow, B, and A-deep separately. But this is completely impractical because I have thousands of data points with hundreds of groups, i.e. I have to produce many depth profiles. Therefore, there has to be a way to program this so that dots are NOT connected beyond a prescribed frequency/depth interval (e.g. 2 m in this case) for a particular group of samples. Any idea?
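    The question is asked in R, but the underlying idea -- split each group into segments wherever the depth gap exceeds the threshold, then draw each segment as its own line -- is easy to see in a short sketch. Illustrated here in Python/matplotlib (the function name and the 2 m default are assumptions); the same diff-and-split logic ports directly to R.

        import numpy as np
        import matplotlib.pyplot as plt

        def plot_segments(depth, value, max_gap=2.0, **kwargs):
            """Draw markers for every point, but only join consecutive points whose
            depth spacing is <= max_gap, so lines break at larger gaps."""
            depth = np.asarray(depth, dtype=float)
            value = np.asarray(value, dtype=float)
            breaks = np.where(np.diff(depth) > max_gap)[0] + 1        # first index of each new segment
            for seg_d, seg_v in zip(np.split(depth, breaks), np.split(value, breaks)):
                plt.plot(seg_v, seg_d, marker="s", **kwargs)

        plot_segments([2, 4, 6, 8, 16, 18, 20], [2, 3, 4, 5, 40, 45, 58], color="green")
        plot_segments([10, 12, 14], [10, 20, 35], color="red")
        plt.gca().invert_yaxis()     # depth increases downward in a depth profile
        plt.show()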

    Read the article

  • Processing 16-bit sample audio

    - by user2431088
    Right now I have an audio file (2 channels, 44.1 kHz sample rate, 16-bit sample size, WAV). I would like to pass it into this method, but I am not sure of any way to convert the WAV file to a byte array.

        /// <summary>
        /// Process 16 bit sample
        /// </summary>
        /// <param name="wave"></param>
        public void Process(ref byte[] wave)
        {
            _waveLeft = new double[wave.Length / 4];
            _waveRight = new double[wave.Length / 4];

            if (_isTest == false)
            {
                // Split out channels from sample
                int h = 0;
                for (int i = 0; i < wave.Length; i += 4)
                {
                    _waveLeft[h] = (double)BitConverter.ToInt16(wave, i);
                    _waveRight[h] = (double)BitConverter.ToInt16(wave, i + 2);
                    h++;
                }
            }
            else
            {
                // Generate artificial sample for testing
                _signalGenerator = new SignalGenerator();
                _signalGenerator.SetWaveform("Sine");
                _signalGenerator.SetSamplingRate(44100);
                _signalGenerator.SetSamples(16384);
                _signalGenerator.SetFrequency(5000);
                _signalGenerator.SetAmplitude(32768);
                _waveLeft = _signalGenerator.GenerateSignal();
                _waveRight = _signalGenerator.GenerateSignal();
            }

            // Generate frequency domain data in decibels
            _fftLeft = FourierTransform.FFTDb(ref _waveLeft);
            _fftRight = FourierTransform.FFTDb(ref _waveRight);
        }
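    What Process() expects is the raw interleaved PCM frames (4 bytes per stereo frame: left int16, right int16), not the RIFF header. As an illustration of where that byte block comes from, here is a sketch using Python's standard-library wave module; in .NET the equivalent would be parsing or skipping the WAV header and reading the data chunk. The file name is hypothetical.

        import wave

        with wave.open("input.wav", "rb") as w:                        # hypothetical file
            assert w.getnchannels() == 2 and w.getsampwidth() == 2     # expect 16-bit stereo
            rate = w.getframerate()
            pcm_bytes = w.readframes(w.getnframes())                   # raw bytes: L-lo L-hi R-lo R-hi ...

        print(len(pcm_bytes), "bytes of sample data at", rate, "Hz")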

    Read the article

  • Changing a Select dropdown with a jquery ui slider

    - by Chris J. Lee
    Trying to set a select dropdown with a slider: you move the jQuery UI slider and it then changes the selection of the other two dropdowns. Is there a current method in jQuery that would set these options? Current dropdown:

        <select id="alert-options-frequency-opts">
            <option value=""></option>
            <option value="1">1</option>
            <option value="2">2</option>
            <option value="3">3</option>
            <option value="4">4</option>
            <option value="5">5</option>
            <option value="6">6</option>
            <option value="7">7</option>
            <option value="8">8</option>
            <option value="9">9</option>
            <option value="10">10</option>
        </select>

    Read the article

  • What is the best way to implement a callback scenario using WCF and ASP.NET MVC?

    - by Mark Struzinski
    I am new to WCF. I just finished reading Learning WCF and I think I've got a pretty good grasp of the fundamentals. I am adding functionality to a line-of-business app that runs on ASP.NET MVC entirely inside the corporate LAN. I am calling into a service that will also send me events as they occur (and not as responses to service calls). These events can occur at any point during the user's session. I have the service written, and it is able to pick up these events. What would be the best way to deliver these events to the user? My initial thought is to run the WCF service in duplex mode over net.tcp and implement the events as callbacks. Using this scenario, the best way I can think of to deliver the events to the user is a dictionary object stored in the session, populated by the callbacks and polled at a set frequency for delivery via AJAX calls. Has anyone dealt with this scenario? Is there a more efficient way to implement this?

    Read the article

  • C++ Bubble Sorting for Singly Linked List [closed]

    - by user1119900
    I have implemented a simple word frequency program in C++. Everything but the sorting is OK; the sorting in the following code does not work. Any urgent help will be great.

        #include <stdio.h>
        #include <string.h>
        #include <stdlib.h>
        #include <ctype.h>
        #include <iostream>
        #include <fstream>
        #include <cstdio>
        using namespace std;

        #include "ProcessLines.h"

        struct WordCounter {
            char *word;
            int word_count;
            struct WordCounter *pNext;   // pointer to the next word counter in the list
        };

        /* pointer to first word counter in the list */
        struct WordCounter *pStart = NULL;
        /* pointer to a word counter */
        struct WordCounter *pCounter = NULL;

        /* Print statistics and words */
        void PrintWords() {
            ...
            pCounter = pStart;
            bubbleSort(pCounter);
            ...
        } //end-PrintWords

        void bubbleSort(struct WordCounter *ptr) {
            WordCounter *temp = ptr;
            WordCounter *curr;
            for (bool didSwap = true; didSwap;) {
                didSwap = false;
                for (curr = ptr; curr->pNext != NULL; curr = curr->pNext) {
                    if (curr->word > curr->pNext->word) {
                        temp->word = curr->word;
                        curr->word = curr->pNext->word;
                        curr->pNext->word = temp->word;
                        didSwap = true;
                    }
                }
            }
        }
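    Two things stand out in the posted bubbleSort, kept above exactly as asked: curr->word > curr->pNext->word compares char pointers (addresses) rather than the words themselves (strcmp), and temp aliases the first node, so "temp->word = curr->word" overwrites the head's word instead of acting as a temporary. For reference, a minimal sketch of a data-swapping bubble sort over a singly linked list, written in Python rather than C++:

        class Node(object):
            def __init__(self, word, count=0):
                self.word = word
                self.count = count
                self.next = None

        def bubble_sort(head):
            """Sort the list in place by swapping node payloads, not node pointers."""
            did_swap = True
            while did_swap:
                did_swap = False
                curr = head
                while curr is not None and curr.next is not None:
                    if curr.word > curr.next.word:      # real string comparison (strcmp in C)
                        # swap payloads through a genuine temporary, not an alias of the head
                        curr.word, curr.next.word = curr.next.word, curr.word
                        curr.count, curr.next.count = curr.next.count, curr.count
                        did_swap = True
                    curr = curr.next
            return head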

    Read the article

  • Avoiding repetition with libraries that use a setup + execute model

    - by lijie
    Some libraries offer the ability to separate setup and execution, especially if the setup portion has undesirable characteristics such as unbounded latency. If the program needs to have this reflected in its structure, then it is natural to have:

        void setupXXX(...);   // which calls the setup stuff
        void doXXX(...);      // which calls the execute stuff

    The problem with this is that the structure of setupXXX and doXXX is going to be quite similar (at least textually -- control flow will probably be more complex in doXXX). I'm wondering if there are any ways to avoid this. Example: let's say we're doing signal processing: filtering with a known kernel in the frequency domain. So setupXXX and doXXX would probably be something like:

        void doFilter(FilterStuff *c) {
            for (int i = 0; i < c->N; ++i) {
                doFFT(c->x[i], c->fft_forward_setup, c->tmp);
                doMultiplyVector(c->tmp, c->filter);
                doFFT(c->tmp, c->fft_inverse_setup, c->x[i]);
            }
        }

        void setupFilter(FilterStuff *c) {
            setupFFT(..., &(c->fft_forward_setup));
            // assign the kernel to c->filter ...
            setupFFT(..., &(c->fft_inverse_setup));
        }
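    One way to avoid writing the pipeline twice is to make each setup step return its execute step, so the structure of the computation is spelled out exactly once and the "do" phase just runs the captured callables. A sketch of that idea in Python, with NumPy's FFT standing in for the planned-FFT library and every name being illustrative:

        import numpy as np

        def setup_fft(direction):
            # stand-in for an expensive planning step; returns the matching execute callable
            transform = np.fft.rfft if direction == "forward" else np.fft.irfft
            def run(signal):
                return transform(signal)
            return run

        def setup_filter(kernel_freq):
            forward = setup_fft("forward")
            inverse = setup_fft("inverse")
            def apply_filter(block):
                # the pipeline's control flow lives here, and only here
                return inverse(forward(block) * kernel_freq)
            return apply_filter

        block = np.random.randn(1024)
        kernel_freq = np.ones(513)                  # identity "filter" in the frequency domain
        apply_filter = setup_filter(kernel_freq)    # pay the setup cost once...
        out = apply_filter(block)                   # ...then execute as often as needed
        print(out.shape)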

    Read the article

  • Filter design for an audio signal

    - by beanyblue
    What I am trying to do is simple. I have a few .wav files. I want to remove noise and filter out specific frequencies. I don't have MATLAB and I intend to write my own code for all the filters. Right now, I have a way to read the .wav file and dump the structure out into a text file. My questions are the following:
    - Can I directly apply digital filters to this sampled data? (i.e., can I directly do a convolution between my input samples and h(n) for the filter function that I choose?)
    - How do I choose the number of coefficients for the window function?
    I have Octave, so if someone can point me to anything that gives me some idea of how to process the .wav file using Octave, that would be great too. I want to be able to filter out the frequency and then listen to the sound again. Is this possible with Octave? I'm just a beginner with these kinds of things, so please bear with me if my questions are too naive. Any help will be great.
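    On the first question: yes -- for an FIR filter, filtering the samples is literally the convolution y = x * h. A small NumPy sketch of the whole idea follows (the windowed-sinc design, the 101-tap length and the cutoff are illustrative choices, and the same arithmetic ports directly to Octave); on the second question, more taps give a narrower transition band at the cost of more computation.

        import numpy as np

        def windowed_sinc_lowpass(cutoff_hz, fs_hz, num_taps=101):
            """FIR low-pass h(n): ideal sinc impulse response truncated by a Hamming window."""
            n = np.arange(num_taps) - (num_taps - 1) / 2.0
            h = np.sinc(2.0 * cutoff_hz / fs_hz * n)      # ideal low-pass, centred on the middle tap
            h *= np.hamming(num_taps)                      # taper the truncation to reduce ripple
            return h / np.sum(h)                           # normalise for unity gain at DC

        fs = 44100.0
        t = np.arange(int(fs)) / fs
        x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 9000 * t)   # toy input signal

        h = windowed_sinc_lowpass(cutoff_hz=2000.0, fs_hz=fs)
        y = np.convolve(x, h, mode="same")                 # the filtering step really is just this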

    Read the article

  • Database design with BLOBs

    - by mmuthu
    Hi, I have a situation where I need to store binary data in the database as a BLOB column. There are three different tables in my database in which I need to store BLOB data for each record. Not every record will have BLOB data all the time; it is time- and user-based.
    - Table one will have to store *.doc files for almost every record.
    - Table two will have to store *.xml, optionally.
    - Table three will have to store images (not sure what the frequency, etc. is).
    Now my question is whether it is a good idea to maintain a separate table to store the BLOB data, pointing it to the respective tables' PKs (yes, there will be no FKs, and the assumption is that the program will maintain them). It would be something like this: BLOB | PK_ID | TABLE_NAME. Alternatively, is it a good idea to keep the BLOB column in the respective tables?
    As far as my application runtime is concerned, table 2 will be read very frequently, though its BLOB column will not be required, and table 2 records will get deleted frequently. The BLOB data in the other respective tables will not be accessed frequently. All of the BLOB content will be read on demand. I'm thinking the first approach will work better for me. What do you guys think? BTW, I'm using Oracle.

    Read the article

  • printSoln module problem

    - by dingo_d
    Hi, I found the module run_kut5 in the book Numerical Methods in Engineering with Python, but that module needs the module printSoln, also provided in the book. I copied the code and made the necessary line adjustments and so on. The code looks like:

        # -*- coding: cp1250 -*-
        ## module printSoln
        ''' printSoln(X,Y,freq).
            Prints X and Y returned from the differential equation solvers
            using printout frequency 'freq'.
            freq = n prints every nth step.
            freq = 0 prints initial and final values only.
        '''
        def printSoln(X,Y,freq):

            def printHead(n):
                print "\n x ",
                for i in range (n):
                    print " y[",i,"] ",
                print

            def printLine(x,y,n):
                print "%13.4e"% x,f
                for i in range (n):
                    print "%13.4e"% y[i],
                print

        m = len(Y)
        try:
            n = len(Y[0])
        except TypeError:
            n = 1
        if freq == 0:
            freq = m
        printHead(n)
        for i in range(0,m,freq):
            printLine(X[i],Y[i],n)
        if i != m - 1:
            printLine(X[m - 1],Y[m - 1],n)

    Now, when I run the program it says:

        line 24, in <module>
            m = len(Y)
        NameError: name 'Y' is not defined

    But I copied it from the book :\ So now when I call the run_kut module I get the same error -- no Y defined in printSoln... I'm trying to figure this out but I suck :( Help, please...
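    The traceback is the clue: "in <module>" means m = len(Y) and everything after it are executing at module level, where no Y exists -- those lines belong inside printSoln, indented one level (most likely the indentation was lost when copying from the book). The stray f in printLine will also raise a NameError once printing starts. A sketch of how the tail of the function should sit:

        def printSoln(X, Y, freq):
            # ... printHead and printLine defined here, as above ...

            m = len(Y)                  # Y is now the function argument, so no NameError
            try:
                n = len(Y[0])
            except TypeError:
                n = 1
            if freq == 0:
                freq = m
            printHead(n)
            for i in range(0, m, freq):
                printLine(X[i], Y[i], n)
            if i != m - 1:
                printLine(X[m - 1], Y[m - 1], n)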

    Read the article

  • Windows 7 is shutting down unexpectedly, according to the logs.

    - by dlamblin
    Here's a message from my eventvwr EventLog (Windows Logs > System):

        The previous system shutdown at 11:51:15 AM on 7/29/2009 was unexpected.

    This is funny because I was wondering why the system shut down while I was playing Civilization IV full screen. Now I know. It was unexpected. Has anyone encountered and resolved this?
    A little background: I am running Windows 7 RC inside VMware Fusion 2 (just updated a few months back) on a MacBook (bitterly not Pro) aluminum-body. Windows 7 occasionally will shut down. This isn't a quick power-off; it's a shutdown where all the programs are exited, the system waits until they quit (and Civ4 doesn't prompt me to save), and it even installed Windows Updates before restarting. And yes, it is restarting right after the shutdown. Because I run a game in full screen mode I do not notice any dialog with a countdown timer or anything like that which might be a warning.
    As I have iStat on my dashboard widgets I can see about 8 temperature monitors. I have seen the CPU get up to 74C before, but during the shutdown, though it seemed hot to the touch (always is), it read 61C for the CPU, 60C for heatsink A, 50C for heatsink B and in the 30s-40s for the enclosure and hard drives. As I type this now, the temps are actually higher, so I don't think temperature caused it. I have at least six such events dating back to 5/17, which was a week after installing Windows 7.
    I did find one information-level warning from USER32 in the system log that says:

        The process C:\Windows\system32\svchost.exe (DLAMBLIN-WIN7) has initiated the restart of computer DLAMBLIN-WIN7
        on behalf of user NT AUTHORITY\SYSTEM for the following reason: Operating System: Recovery (Planned)
        Reason Code: 0x80020002
        Shutdown Type: restart
        Comment:

    And another, 15 minutes before that, from Windows Update:

        Restart Required: To complete the installation of the following updates, the computer will be restarted within 15 minutes:
        - Cumulative Security Update for Internet Explorer 8 for Windows 7 Release Candidate for x64-based Systems (KB972260)

    Which I think kind of explains it. Though I don't know why restarting after an update would create an error event of "shutdown was unexpected" -- isn't that pretty odd? Now, how do I set it to never restart after an update unless I click something?
    Application of solution: As fretje reminded me, there are a couple of configurable settings for this; in Windows 7 they're in much the same place as in Windows 2000 SP3 and XP SP1. Running gpedit.msc pops up a window that lists these policies (Windows 7 has changed the order and added a couple of newer options I've italicized):
    - Do not display 'Install Updates and Shut Down' in Shut Down Windows dialog box
    - Do not adjust default option to 'Install Updates and Shut Down' in Shut Down Windows dialog box
    - Enabling Windows Power Management to automatically wake up the system to install scheduled updates
    - Configure Automatic Updates
    - Specify intranet Microsoft update service location
    - Automatic Updates detection frequency
    - Allow non-administrators to receive update notifications
    - Turn on Software Notifications
    - Allow Automatic Updates immediate installation
    - Turn on recommended updates via Automatic Updates
    - No auto-restart with logged-on users for scheduled Automatic Updates
    - Re-prompt for restart with scheduled installations
    - Delay Restart for scheduled installations
    - Reschedule Automatic Updates schedule

    Read the article

  • IRP_MJ_WRITE latency up to 15 seconds

    - by racitup
    We have written an application that performs small (22 kB) writes to multiple files at once (one thread performing asynchronous queued writes to multiple locations on behalf of other threads) on the same local volume (RAID1). 99.9% of the writes are low-latency, but occasionally (maybe every minute or two) we get one or two huge-latency writes (I have seen 10 seconds and above) without any real explanation.
    Platform: Win2003 Server with NTFS. Monitoring: Sysinternals Process Monitor (see link below) and our own application logging.
    We have tried multiple things to try and solve this that have been gleaned from a few websites, e.g.:
    - Making the first part of file names unique to aid 8.3 name generation
    - Writing files to multiple directories
    - Changing Intel Disk Write Caching
    - Windows File/Printer Sharing settings: Minimize memory used / Balance / Maximize data throughput for file sharing / Maximize data throughput for network applications (System - Advanced - Performance - Advanced)
    - NtfsDisableLastAccessUpdate - use "fsutil behavior set disablelastaccess 1"
    - Disable 8.3 name generation - use "fsutil behavior set disable8dot3 1" + restart
    - Enable a large size file system cache
    - Disable paging of the kernel code
    - IO Page Lock Limit
    - Turn Off (or On) the Indexing Service
    But nothing seems to make much difference. There's a whole host of things we haven't tried yet, but we wondered if anyone had come across the same problem, a reason and a solution (programmatic or not)?
    We can reproduce the problem using IOMeter and a simple setup:
    1. Start IOMeter and remove all but the first worker thread in 'Topology' using the disconnect button.
    2. Select the worker thread and put a cross in the box next to the disk you want to use in the Disk Targets tab, and put '2000000' in Maximum Disk Size (NOTE: must have at least 1 GB free space; sector size is 512 bytes).
    3. Next create a new access specification and add it to the worker thread: Transfer Request Size = 22 kB, 100% Sequential, Percent of Access Spec = 100%, Percent Read/Write = 100% Write.
    4. Change Results Display Update Frequency to 5 seconds, Test Setup Run Time to 20 seconds and both 'Number of Workers to Spawn Automatically' settings to zero.
    5. Select the worker thread in the Topology panel and hit the Duplicate Worker button 59 times to create 60 threads with identical settings.
    6. Hit the 'Go' button (green flag) and monitor the Results tab. The 'Maximum I/O Response Time (ms)' always hits at least 3500 on our machine.
    Our machine isn't exactly slow (Xeon 8-core rack server with 4 GB and onboard RAID). I'd be interested to see what other people get. We have a feeling it might be something to do with the NTFS filesystem (ours is currently 75% full of fragmented files) and we are going to try a few things around this principle. But it is also related to disk performance, since we don't see it on a RAMDisk and it's not as severe on a RAID10 array. Any help is much appreciated. Richard
    Right-click and select 'Open Link in New Tab': ProcMon Result

    Read the article

  • Can't detect wireless networks, running Ubuntu 9.10 on Eee PC 1000HE

    - by David Brown
    Hi, I am running Ubuntu 9.10 on an ASUS eeepc 1000HE, and my wireless card, RaLink RT 2860, is failing to recognize any wireless networks. It was working fine this morning, until I accidentally hit Fn F2, disabling the wireless card. I hit Fn F2 again to reenable it, but it no longer will detect any wireless networks (and other computers in the household do recognize them). Any help at all will be greatly appreciated! I googled quite a bit and talked to a human about this but could not fix it. We tried dhclient ra0, which returned:

        There is already a pid file /var/run/dhclient.pid with pid 2438
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.2
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/

        Listening on LPF/ra0/00:25:d3:13:f4:20
        Sending on   LPF/ra0/00:25:d3:13:f4:20
        Sending on   Socket/fallback
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 6
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 9
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 12
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 8
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 10
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 10
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 6
        No DHCPOFFERS received.
        No working leases in persistent database - sleeping.

    Other data: iwconfig returns (other stuff here...)

        ra0       RT2860 Wireless  ESSID:""  Nickname:"RT2860STA"
                  Mode:Auto  Frequency=2.412 GHz  Bit Rate=1 Mb/s
                  RTS thr:off  Fragment thr:off
                  Link Quality=10/100  Signal level:0 dBm  Noise level:-87 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0  Missed beacon:0

    ifconfig returns (other stuff here...)

        ra0       Link encap:Ethernet  HWaddr 00:25:d3:13:f4:20
                  inet6 addr: fe80::225:d3ff:fe13:f420/64 Scope:Link
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:583 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:19

        ra0:avahi Link encap:Ethernet  HWaddr 00:25:d3:13:f4:20
                  inet addr:169.254.7.117  Bcast:169.254.255.255  Mask:255.255.0.0
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  Interrupt:19

    Read the article

  • NTPD issue - syncs then slowly loses ground

    - by ethrbunny
    RHEL 5 workstation. Has been running smoothly for years. I did a 'pup' recently and followed with a nice, cleansing reboot. Afterwards the system had some startup issues: namely, MySQL refused to start. It just went "...." for 5-10 minutes before I did another boot and skipped that step (using 'interactive'). This was the only service that didn't want to start normally. So now that the system is booted I've found that it doesn't want to stay in sync with the NTP master, and after 48 hours it is refusing any SSH other than root.
    NTPD: this service starts normally and gets a lock on 4 servers. Almost immediately it starts to lose ground and now (after 3 days) is almost 40 hours behind. If I stop/start the service it gets the lock, resets the system clock and starts losing ground again. The 'hwclock' is set properly and maintains its time.
    Login: when I (re)start the ntp server I am able to log in normally. I assume this problem is due to losing sync with LDAP. This appears to be verified by LDAP errors in /var/log/messages. Suggestions on where to look?
    ADDENDA: Tried deleting the 'drift' file. After a bit it gets recreated with 0.000. From /var/log/messages:

        Jan 17 06:54:01 aeolus ntpdate[5084]: step time server 129.95.96.10 offset 30.139216 sec
        Jan 17 06:54:01 aeolus ntpd[5086]: ntpd [email protected] Tue Oct 25 12:54:17 UTC 2011 (1)
        Jan 17 06:54:01 aeolus ntpd[5087]: precision = 1.000 usec
        Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface wildcard, 0.0.0.0#123 Disabled
        Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface wildcard, ::#123 Disabled
        Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface lo, ::1#123 Enabled
        Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface eth0, fe80::213:72ff:fe20:4080#123 Enabled
        Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface lo, 127.0.0.1#123 Enabled
        Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface eth0, 10.127.24.81#123 Enabled
        Jan 17 06:54:01 aeolus ntpd[5087]: kernel time sync status 0040
        Jan 17 06:54:02 aeolus ntpd[5087]: frequency initialized 0.000 PPM from /var/lib/ntp/drift
        Jan 17 06:54:02 aeolus ntpd[5087]: system event 'event_restart' (0x01) status 'sync_alarm, sync_unspec, 1 event, event_unspec' (0xc010)

    You can see the 30 second offset. This was after about one minute of operation.

    Read the article

  • Video memory bus width vs. video memory bandwidth

    - by Mixxiphoid
    My current video card (9600GT) is dying and I'm searching for a new video card. Between acquiring my current one and now, I have learned a lot more about hardware and I want to use that to pick my new card. So I decided not to just buy some popular card blindly, but to search for a card able to handle my hardware requirements.
    I searched the specs at the NVIDIA site for the GT 640 and was confused by the memory section, and some questions arose. My current card's memory bus width is 256-bit and it has 1 GB of memory. I checked Google about the importance of bus width, and all the links basically said the same thing: the higher the number, the more traffic can potentially be transferred simultaneously. This was already clear to me, yet there are currently a lot of new cards considered better than my current one that have a lower bus width. To go into more detail, I copied the memory info from the NVIDIA site:

        GT 640 memory specs           DDR3 model    GDDR5 model
        Memory Clock                  1.8 Gbps      5.0 Gbps
        Standard Memory Config        2048 MB       1024 MB
        Memory Interface              DDR3          GDDR5
        Memory Interface Width        128-bit       64-bit
        Memory Bandwidth (GB/sec)     28.5          40.0

    What puzzled me is that the memory bandwidth seems to me the most important part, yet the lower bus width has the higher 'performance'. Is this because the memory interface is GDDR5 and is therefore able to run at a higher memory clock speed (5 Gbps)? If I am to buy a new video card, should I check the bus width? Memory clock? Bandwidth? Amount of memory? My current card has 1 GB of memory, so I was searching for a 2 GB card, but now I'm not so sure any more whether that is really 'better'.
    My main question: to me it seems that memory performance is made up of the combination of bus width and frequency. Is this true? If yes, why are there so many sites telling me I need to get a card with a high bus width? If no, then what IS important when it comes to memory performance on a video card?
    NOTE: The memory bandwidth is (almost) never displayed on vendor sites. How can I determine which card is better without knowing the bandwidth?
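    A short worked check of the relationship the question is circling: peak bandwidth is roughly the bus width in bytes multiplied by the effective per-pin data rate, which is why the 64-bit GDDR5 configuration still comes out ahead.

        def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
            return bus_width_bits / 8.0 * data_rate_gbps       # bytes per transfer x transfers per second

        print(bandwidth_gb_s(128, 1.8))   # DDR3 model:  128-bit x 1.8 Gbps -> 28.8 GB/s (NVIDIA lists 28.5)
        print(bandwidth_gb_s(64, 5.0))    # GDDR5 model:  64-bit x 5.0 Gbps -> 40.0 GB/s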

    Read the article

  • System Requirements of a write-heavy application serving hundreds of requests per second

    - by Rolando Cruz
    NOTE: I am a self-taught PHP developer who has little to no experience managing web and database servers.
    I am about to write a web-based attendance system for a very large userbase. I expect around 1000 to 1500 users logged in at the same time, making at least 1 request every 10 seconds or so for a span of 30 minutes a day, 3 times a week. So it's more or less 100 requests per second, or at the very worst 1000 requests in a second (an average of 16 concurrent requests? But it could be higher given the short timeframe in which users will make these requests. Crosses fingers to avoid 100 concurrent requests).
    I expect two types of transactions, a local (not referring to a local network) and a foreign transaction.
    - Local transactions basically download user data in their locality and cache it for 1-2 weeks. Attendance requests will probably be two numeric strings only: userid and eventid.
    - Foreign transactions are for attendance of those who do not belong to the current locality. These will pass the following data instead: (numeric) locality_id, (string) full_name.
    Both requests are done in Ajax, so no HTML data is included, only JSON. Both types of requests expect at the very least a single numeric response from the server. I think there will be a 50-50 split in the frequency of local and foreign transactions, but there are only a few bytes of difference anyway in the sizes of these transactions. At the moment, userids may only reach 6 digits and eventids are 4- to 5-digit integers.
    I expect my users table to have at least 400k rows, the event table to have as many as 10k rows, a locality table with at least 1500 rows, and my main attendance table to increase by 400k rows (based on the number of users in the users table) a day for 3 days a week (1.2M rows a week). For me, this sounds big. But is it really that big? Or can it be handled by a single server (not sure about the server specs yet, since I'll probably get a VPS from ServInt or others)? I tried to read up on multiple-server setups: Heartbeat, DRBD, master-slave setups. But I wonder if they're really necessary. The users table will add around 500-1k rows a week.
    If this can't be handled by a single server, then if I am to choose a MySQL replication topology, what would be the best setup for this case?
    Sorry if I sound vague or the question is too wide. I just don't know what to ask or what you want to know at this point.
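    A quick sanity check of the figures quoted above, as plain arithmetic (the yearly extrapolation is an assumption that the 3-day-a-week pace holds all year):

        users            = 1500            # peak concurrent users
        request_interval = 10              # seconds between requests per user
        print(users / request_interval)    # 150 requests/second sustained at the 1500-user peak

        rows_per_day = 400_000             # attendance rows added per active day
        active_days  = 3                   # active days per week
        print(rows_per_day * active_days)          # 1,200,000 rows/week, as stated
        print(rows_per_day * active_days * 52)     # ~62 million rows/year at that pace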

    Read the article

  • Windows Server 2003 W3SVC Failing, Brute Force attack possibly the cause

    - by Roaders
    This week my website has disappeared twice for no apparent reason. I logged onto my server (Windows Server 2003 Service Pack 2) and restarted the World Wide Web Publishing service; the website was still down. I tried restarting a few other services like DNS and Cold Fusion and the website was still down. In the end I restarted the server and the website reappeared. Last night the website went down again. This time I logged on and looked at the event log. SCARY STUFF! There were hundreds of these:

        Event Type: Information   Event Source: TermService   Event Category: None   Event ID: 1012
        Date: 30/01/2012   Time: 15:25:12   User: N/A   Computer: SERVER51338
        Description: Remote session from client name a exceeded the maximum allowed failed logon attempts.
        The session was forcibly terminated.

    at a frequency of around 3-5 a minute. At about the time my website died there was one of these:

        Event Type: Information   Event Source: W3SVC   Event Category: None   Event ID: 1074
        Date: 30/01/2012   Time: 19:36:14   User: N/A   Computer: SERVER51338
        Description: A worker process with process id of '6308' serving application pool 'DefaultAppPool'
        has requested a recycle because the worker process reached its allowed processing time limit.

    Which is obviously what killed the web service. There were then a few of these:

        Event Type: Error   Event Source: TermDD   Event Category: None   Event ID: 50
        Date: 30/01/2012   Time: 20:32:51   User: N/A   Computer: SERVER51338
        Description: The RDP protocol component "DATA ENCRYPTION" detected an error in the protocol stream
        and has disconnected the client.
        Data:
        0000: 00 00 04 00 02 00 52 00   ......R.
        0008: 00 00 00 00 32 00 0a c0   ....2..À
        0010: 00 00 00 00 32 00 0a c0   ....2..À
        0018: 00 00 00 00 00 00 00 00   ........
        0020: 00 00 00 00 00 00 00 00   ........
        0028: 92 01 00 00               ...

    with no more of the first error type. I am concerned that someone is trying to brute-force their way into my server. I have disabled all the accounts apart from the IIS ones and Administrator (which I have renamed). I have also changed the password to an even more secure one. I don't know why this brute-force attack caused the web service to stop, and I don't know why restarting the service didn't fix the problem. What should I do to make sure my server is secure, and what should I do to make sure the webserver doesn't go down any more? Thanks.

    Read the article

  • Linux per-process resource limits - a deep Red Hat Mystery

    - by BobBanana
    I have my own multithreaded C program which scales in speed smoothly with the number of CPU cores. I can run it with 1, 2, 3, etc. threads and get linear speedup, up to about 5.5x speed on a 6-core CPU on a Ubuntu Linux box.
    I had an opportunity to run the program on a very high-end Sunfire x4450 with 4 quad-core Xeon processors, running Red Hat Enterprise Linux. I was eagerly anticipating seeing how fast the 16 cores could run my program with 16 threads. But it runs at the same speed as just TWO threads! Much hair-pulling and debugging later, I see that my program really is creating all the threads, they really are running simultaneously, but the threads themselves are slower than they should be. 2 threads run about 1.7x faster than 1, but 3, 4, 8, 10, 16 threads all run at just net 1.9x! I can see all the threads are running (not stalled or sleeping), they're just slow.
    To check that the HARDWARE wasn't at fault, I ran SIXTEEN copies of my program independently, simultaneously. They all ran at full speed. There really are 16 cores, they really do run at full speed, and there really is enough RAM (in fact this machine has 64GB, and I only use 1GB per process).
    So, my question is whether there's some OPERATING SYSTEM explanation, perhaps some per-process resource limit which automatically scales back thread scheduling to keep one process from hogging the machine. Clues are:
    - My program does not access the disk or network. It's CPU-limited.
    - Its speed scales linearly on a single-CPU box in Ubuntu Linux with a hexacore i7 for 1-6 threads. 6 threads is effectively a 6x speedup.
    - My program never runs faster than a 2x speedup on this 16-core Sunfire Xeon box, for any number of threads from 2-16.
    - Running 16 copies of my program single-threaded runs perfectly, all 16 running at once at full speed.
    - top shows 1600% of CPUs allocated. /proc/cpuinfo shows all 16 cores running at the full 2.9GHz speed (not the low-frequency idle speed of 1.6GHz).
    - There's 48GB of RAM free; it is not swapping.
    What's happening? Is there some process CPU-limit policy? How could I measure it if so? What else could explain this behavior? Thanks for your ideas to solve this, the Great Xeon Slowdown Mystery of 2010!

    Read the article

  • Can't save screen resolution setting.

    - by Searock
    Hi, my screen resolution in Windows and a previous version of Ubuntu (9.04) was 1152 x 864, but Ubuntu 10.04 only gives me options of 1024 x 768 and 1360 x 768. I have somehow managed to add a 1152 x 864 mode by using the xrandr command:

        searock@searock-desktop:~$ cvt 1152 864
        1152x864 59.96 Hz (CVT 1.00M3) hsync: 53.78 kHz; pclk: 81.75 MHz
        Modeline "1152x864_60.00"  81.75  1152 1216 1336 1520  864 867 871 897 -hsync +vsync
        searock@searock-desktop:~$ xrandr --newmode "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync
        searock@searock-desktop:~$ xrandr --addmode S-video 1152x864
        xrandr: cannot find output "S-video"
        searock@searock-desktop:~$ xrandr
        Screen 0: minimum 320 x 200, current 1024 x 768, maximum 4096 x 4096
        VGA1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
           1360x768       59.8
           1024x768       60.0*
           800x600        60.3     56.2
           848x480        60.0
           640x480        59.9     59.9
           1152x864_60.00 (0x124)  81.0MHz
              h: width  1152 start 1216 end 1336 total 1520 skew 0 clock 53.3KHz
              v: height 864 start 867 end 871 total 897 clock 59.4Hz
        searock@searock-desktop:~$ xrandr --addmode VGA1 1152x864_60.00

    But the problem is that whenever I restart my computer I get this message:

        Could not apply the stored configuration for the monitors.
        Could not find a suitable configuration of screens.

    And then it comes back to 1024 x 768. My graphic card details: Intel(R) 82945G Express Chipset Family. Is there any way I can fix this once and for all? Thanks.
    Edit 1: rumtscho has suggested that I modify the xorg.conf file, but I am not sure what HorizSync means. Is it the horizontal frequency? My monitor model is an Acer V173. So what should HorizSync and VertRefresh be?
    Edit 2: I have edited my xorg.conf file as follows:

        Section "Monitor"
            Identifier   "Configured Monitor"
            HorizSync    30-80
            VertRefresh  55-75
        EndSection

    Then I added the resolution and restarted my computer, and I am still facing the same problem. Is there something that I am missing?
    Edit 3: For now I have edited /etc/gdm/Init/Default (the gdm startup script) to include the following xrandr commands, just below the line "initctl -q emit login-session-start DISPLAY_MANAGER=gdm":

        xrandr --newmode "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync
        xrandr --addmode VGA1 1152x864_60.00
        xrandr -s 1152x864_60.00

    This has solved my problem, but these commands have increased my computer's boot time. I think I will have to edit the xorg file properly.
    Edit 4: Instead of adding this to the gdm startup scripts, I have created a shell script and added it to startup (System - Preferences - Startup Applications):

        #!/bin/bash
        xrandr --newmode "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync
        xrandr --addmode VGA1 1152x864_60.00
        xrandr -s 1152x864_60.00

    And don't forget to add execute permission (Right Click - Properties - Permissions - Allow executing file as program).

    Read the article

  • Monitoring tools that can take high rate and high volume?

    - by Jon Watte
    We're using Cacti with RRDTool to monitor and graph about 100,000 counters spread across about 1,000 Linux-based nodes. However, our current setup generally only gives us 5-minute graphs (with some data being minute-based); we often make changes where seeing feedback in "near real time" would be of value. I'd like approximately a week of 5- or 10-second data, a year of 1-minute data, and 5 years of 10-minute data. I have SSD disks and a dual-hexa-core server to spare.
    I tried setting up a Graphite/carbon/whisper server and had about 15 nodes pipe to it, but it only has "average" as the retention function when promoting to older buckets. This is almost useless -- I'd like min, max, average, standard deviation, and perhaps "total sum" and "number of samples" or perhaps "95th percentile" available. The developer claims there's a new back-end "in beta" that allows you to write your own function, but this appears to still only do 1:1 retention (when saving older data, you really want the statistics calculated into many streams from a single input). Also, "in beta" seems a little risky for this installation. If I'm wrong about this assumption, I'd be happy to be shown my error!
    I've heard Zabbix recommended, but it puts data into MySQL or some other SQL database. 100,000 counters on a 5-second interval means 20,000 tps, and while I have an SSD, I don't have an 8-way RAID-6 with battery-backed cache, which I think I'd need for that to work out :-) Again, if that's actually something that's not a problem, I'd be happy to be shown the error of my ways. Also, can Zabbix do the single-data-stream, promote-with-statistics thing?
    Finally, Munin claims to have a new 2.0 coming out "in beta" right now, and it boasts custom retention plans. However, again, it's that "in beta" part -- has anyone used that for real, and at scale? How did it perform, if so?
    I'm almost thinking about using a graphing front-end (such as Graphite) and rolling my own retention backend with a simple layer on top of mmap() and some stats. That wouldn't be particularly hard, and would probably perform very well, letting the kernel figure out the balance between frequency of flushing to disk and process operations.
    Any other suggestions I should look into? Note: it has to have shown itself able to sustain the kinds of data loads I'm suggesting above; if you can point at the specific implementation you're referencing, so much the better!
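    The "many statistics from a single input" consolidation described above only needs a handful of running aggregates per coarse bucket; percentiles are the exception, since they need the raw samples or a sketching structure. A minimal Python sketch of such a bucket record (the naive sum-of-squares variance is fine for a sketch, though a production version would likely use Welford's update for numerical stability):

        import math

        class Bucket(object):
            """Running aggregate for one coarse time bucket: enough state to report
            min, max, mean, standard deviation, total sum and sample count."""
            __slots__ = ("n", "total", "sq_total", "lo", "hi")

            def __init__(self):
                self.n, self.total, self.sq_total = 0, 0.0, 0.0
                self.lo, self.hi = float("inf"), float("-inf")

            def add(self, value):
                self.n += 1
                self.total += value
                self.sq_total += value * value
                self.lo = min(self.lo, value)
                self.hi = max(self.hi, value)

            def stats(self):
                mean = self.total / self.n
                var = max(self.sq_total / self.n - mean * mean, 0.0)
                return {"min": self.lo, "max": self.hi, "mean": mean,
                        "stddev": math.sqrt(var), "sum": self.total, "count": self.n}

        # Promote six 10-second samples into one 1-minute bucket:
        b = Bucket()
        for sample in [3.0, 4.5, 2.5, 4.0, 3.5, 5.0]:
            b.add(sample)
        print(b.stats())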

    Read the article

  • WiFi signal is lost every 3 minutes

    - by Software Monkey
    For several weeks now my Android phone has been losing its WiFi signal momentarily, typically at about 3-minute intervals (about 3 minutes, 1.5 seconds) and occasionally at some longer interval that always seems to be just over 3 minutes. This causes an interruption of several seconds while the WiFi connection is re-established, typically fails any kind of download/streaming that is happening, makes web sites "unreachable" and generally makes the phone unusable as a data device due to the frequency. The signal remains down for about a second, but the phone takes a few more seconds to reconnect to the router. This happens regardless of proximity to the router, which otherwise shows a very strong signal -- usually -40 to -30 dBm or better in the same room, and nowhere weaker than -60 dBm anywhere in the house.
    Changing channels does not help (I've tried 1, 4, 8 and 9). Nor does turning off the router's guest access. Nor does turning off the 5.0 GHz band. Monitoring the signal on my phone with WiFi Analyzer shows all WiFi signals on all channels drop to nothing when the WiFi connection is lost (there are two other networks on different channels which are strong enough to be relevant, with about 6 others constantly fading in and out). WiFi Analyzer shows 3 separate signals for my router: the main 2.4 GHz, the guest 2.4 GHz and the 5.0 GHz.
    Using WiFi Analyzer on my wife's phone side-by-side shows no change in signal when my phone drops, nor does her phone drop. Monitoring the signal using our laptop, side-by-side, likewise shows no signal loss, and likewise the laptop does not lose its WiFi connection. But at work the phone seems not to exhibit the same behavior, or, if it does, it's very occasional. Monitoring it all day at work I only saw the signal drop 3 or 4 times. The signal strength of the various networks there is comparatively weak.
    AT&T were super helpful: "Sorry, we can't help you with WiFi problems. You could try doing a factory reset on your phone". </sarcasm
    The router is relatively new, but has been working fine with this phone since last Dec.
    Phone: Motorola Atrix (about 8 months old).
    Router: Belkin N750 DB (F9K1103 v1 (01C)).
    Router firmware: 1.00.46 (2011/10/28 6:37:11).
    Security: WPA/WPA2-Personal (PSK)

    Read the article
