Search Results

Search found 63811 results on 2553 pages for 'compile time weaving'.


  • How can I compile SM 3.0 effects in D3D11 in slimdx?

    - by jacker
        var bytecode = ShaderBytecode.CompileFromFile("shaders\\testShader.fx", "fx_5_0", ShaderFlags.None, SlimDX.D3DCompiler.EffectFlags.None, null, null, out str);
        var effect = new SlimDX.Direct3D11.Effect(gpu.Device, bytecode);

    This works fine, but if I try to use another shader model, such as 4.0 or 3.0, it throws an error when the effect is created: E_FAIL: An undetermined error occurred (-2147467259). How do I compile older shaders? I've also read about device contexts, but I can't find any information on how to use them to maintain DX9 compatibility.

    Read the article

  • How does Python compile some of its code in C?

    - by Howcan
    I read that some constructs of Python are more efficient because they are compiled in C: https://wiki.python.org/moin/PythonSpeed/PerformanceTips Some of the examples used were map() and filter(). I was wondering how Python is able to do this. It's generally interpreted, so how does some of the code get compiled while the rest is interpreted - and in a different language? Why not just compile the whole thing?
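
    To see the effect the performance-tips page describes, here is a small timing sketch (my own illustration, not from the original post): the built-in map() runs its loop inside the interpreter's C code, while the equivalent explicit for-loop is executed bytecode by bytecode.

        # compare a C-implemented built-in against an interpreter-level loop
        import timeit

        setup = "words = ['hello'] * 10000"

        builtin_version = "list(map(str.upper, words))"
        python_loop = ("result = []\n"
                       "for w in words:\n"
                       "    result.append(w.upper())")

        print(timeit.timeit(builtin_version, setup=setup, number=1000))
        print(timeit.timeit(python_loop, setup=setup, number=1000))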

    Read the article

  • How do I compile & install the newest version of Transmission?

    - by Codemonkey
    I'm trying to install Transmission 2.51 on Ubuntu 10.04. Compiling the source goes fine, but I can't seem to get it to compile the GUI as well. This is the configure output:

        Configuration:
            Source code location: .
            Compiler: g++
            Build libtransmission: yes
              * optimized for low-resource systems: no
              * µTP enabled: yes
            Build Command-Line client: yes
            Build GTK+ client: no (GTK+ none)
              * libappindicator for an Ubuntu-style tray: no
            Build Daemon: yes
            Build Mac client: no

    How do I get it to build the GTK+ client?

    Read the article

  • Reading a Serial Port - Ignore portion of data written to serial port for certain time

    - by farmerjoe
    I would like to read data coming from an Arduino on a serial port at intervals. So essentially something like:

        Take a reading
        Wait
        Take a reading
        Wait
        ...

    The problem I am facing is that the port buffers its information, so as soon as I call a wait function the data on the serial port starts buffering. Once the wait function finishes I try to read the data again, but I am reading from the beginning of the buffer and the data is not current anymore; instead it is the reading taken at roughly the time the wait function began. My question is whether there is a way, that I am unaware of, to ignore the portion of data read in during that wait period and only read what is currently being delivered on the serial port. I have something analogous to this so far:

        import time
        import serial

        s = serial.Serial(path_to_my_serial_port, 9600)
        while True:
            print s.readline()
            time.sleep(.5)

    For explanation purposes I have the Arduino outputting the time since it began its loop. Going by the Python code, the readings should be half a second apart. Going by the serial output, the time is incrementing by less than a millisecond. These values do not change regardless of the sleep timing. Sample output:

        504
        504
        504
        504
        505
        505
        505
        ...

    As an idea of my end goal: I would like to measure the value on the port, wait a time delay, see what the value is then, wait again, see what the value is then, wait again. I am currently using Python for this but am open to other languages.
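
    One way to handle this (a sketch of my own, not from the original post; it assumes pyserial's flushInput()/readline() behaviour, and the port path is a placeholder) is to throw away whatever accumulated during the sleep and then read a fresh line:

        # sketch: discard stale buffered data after each sleep, then read a current line
        import time
        import serial

        s = serial.Serial("/dev/ttyACM0", 9600)   # placeholder port; use your own path
        while True:
            time.sleep(0.5)
            s.flushInput()        # drop everything the Arduino sent while we were sleeping
            s.readline()          # the next line may be partial, so discard it
            print(s.readline())   # this line was produced after the flush, i.e. it is current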

    Read the article

  • Measuring execution time of selected loops

    - by user95281
    I want to measure the running times of selected loops in a C program so as to see what percentage of the total time for executing the program (on Linux) is spent in these loops. I should be able to specify the loops for which the performance should be measured. I have tried several tools (VTune, HPCToolkit, OProfile) in the last few days and none of them seem to do this. They all find the performance bottlenecks and just show the time for those. That's because these tools only store the time taken that is above a threshold (~1ms), so if a loop takes less time than that, its execution time won't be reported. The basic block counting feature of gprof depends on a feature of older compilers that is not supported now.

    I could manually write a simple timer using gettimeofday or something like that, but in some cases it won't give accurate results. For example:

        for (i = 0; i < 1000; ++i) {
            for (j = 0; j < N; ++j) {
                // do some work here
            }
        }

    Now I want to measure the total time spent in the inner loop, and I will have to put a call to gettimeofday inside the first loop. So gettimeofday itself will get called 1000 times, which introduces its own overhead, and the result will be inaccurate.
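
    For reference, a sketch of the manual-timer variant using clock_gettime(CLOCK_MONOTONIC) instead of gettimeofday (my own example, not from the post): each call costs on the order of tens of nanoseconds, so the 2,000 calls around the inner loop add very little to the measurement.

        /* sketch: accumulate inner-loop time with clock_gettime; on older glibc link with -lrt */
        #include <stdio.h>
        #include <time.h>

        #define N 100000

        int main(void)
        {
            struct timespec t0, t1;
            double inner_seconds = 0.0;
            volatile long sink = 0;

            for (int i = 0; i < 1000; ++i) {
                clock_gettime(CLOCK_MONOTONIC, &t0);
                for (int j = 0; j < N; ++j) {
                    sink += j;              /* stand-in for "do some work here" */
                }
                clock_gettime(CLOCK_MONOTONIC, &t1);
                inner_seconds += (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            }

            printf("time spent in inner loop: %.6f s\n", inner_seconds);
            return 0;
        }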

    Read the article

  • Time complexity to fill hash table (homework)?

    - by Heathcliff
    This is a homework question, but I think there's something missing from it. It asks:

        Provide a sequence of m keys to fill a hash table implemented with linear probing, such that the time to fill it is minimum.

    And then:

        Provide another sequence of m keys, but such that the time to fill it is maximum. Repeat these two questions if the hash table implements quadratic probing.

    I can only assume that the hash table has size m, both because it's the only number given and because we have been using that letter for the hash table size before, when describing the load factor. But I can't think of any sequence for the first part without knowing the hash function that hashes the sequence into the table. If it is a bad hash function that, for instance, hashes every entry to the same index, then both the minimum and the maximum time to fill it will be O(n), regardless of what the sequence looks like. And in the average case, where I assume the hash function is OK, how am I supposed to know how long it will take for that hash function to fill the table? Aren't these questions tied more strongly to the hash function than to the sequence that is hashed?

    As for the second question, I can assume that, regardless of the hash function, a sequence of m copies of the same key will give the maximum time, because it will cause linear probing from the second entry on. I think that will take O(n) time. Is that correct? Thanks
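
    As a rough sanity check on that reasoning (a sketch of my own, and it assumes a specific hash function h(k) = k mod m, which the assignment does not give), you can count the probes linear probing needs for a best-case and a worst-case key sequence:

        # sketch: count linear-probing steps needed to insert a key sequence into a
        # table of size m, assuming h(k) = k % m (an assumption; no hash is given)
        def probes_to_fill(keys, m):
            table = [None] * m
            total = 0
            for k in keys:
                i = k % m
                while table[i] is not None:   # linear probing: step to the next slot
                    i = (i + 1) % m
                    total += 1
                table[i] = k
                total += 1                    # count the probe that found the empty slot
            return total

        m = 10
        print(probes_to_fill(range(m), m))    # distinct home slots: m probes total
        print(probes_to_fill([0] * m, m))     # all collide at slot 0: m*(m+1)/2 probes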

    Read the article

  • Can I get the amount of time for which a key is pressed on a keyboard

    - by Adi
    Dear all, I am working on a project in which I have to develop bio-passwords based on a user's keystroke style. Suppose a user types a password 20 times; his keystrokes are recorded, like:

        hold time: time for which a particular key is pressed.
        digraph time: time it takes to press a different key.

    Suppose a user types the password "COMPUTER". I need to know the time for which every key is pressed; something like the hold times for the above password:

        C -- 200 ms
        O -- 130 ms
        M -- 150 ms
        P -- 175 ms
        U -- 320 ms
        T -- 230 ms
        E -- 120 ms
        R -- 300 ms

    The rationale behind this is that every user will have a different hold time. Say an old person is typing the password: he will take more time than a student, and it will be unique to a particular person. To do this project, I need to record the time for which each key is pressed. I would greatly appreciate it if anyone could guide me on how to get these times.

    Edit: the language is not important, but I would prefer C. I am more interested in getting the dataset.
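
    Since C was preferred, here is a minimal sketch for Linux under X11 (my own assumption about the environment; it needs a running X session, and keyboard auto-repeat will inject extra press/release pairs) that prints how long each key is held, using the millisecond timestamps X attaches to key events:

        /* sketch: record KeyPress/KeyRelease timestamps under X11 and print hold times.
           Build with: cc keyhold.c -lX11 */
        #include <stdio.h>
        #include <X11/Xlib.h>

        int main(void)
        {
            Display *dpy = XOpenDisplay(NULL);
            if (!dpy) {
                fprintf(stderr, "cannot open display\n");
                return 1;
            }

            Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                             0, 0, 200, 200, 0, 0, 0);
            XSelectInput(dpy, win, KeyPressMask | KeyReleaseMask);
            XMapWindow(dpy, win);

            Time pressed[256] = {0};            /* press timestamp per keycode */
            for (;;) {
                XEvent ev;
                XNextEvent(dpy, &ev);
                unsigned int code = ev.xkey.keycode & 0xff;
                if (ev.type == KeyPress) {
                    pressed[code] = ev.xkey.time;
                } else if (ev.type == KeyRelease && pressed[code]) {
                    printf("keycode %u held for %lu ms\n",
                           code, (unsigned long)(ev.xkey.time - pressed[code]));
                    pressed[code] = 0;
                }
            }
        }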

    Read the article

  • PHP date returning wrong time

    - by gargantaun
    The following script returns the wrong time after I call date_default_timezone_set("UTC"):

        <?PHP
        $timestamp = time();
        echo "<p>Timestamp: $timestamp</p>";

        // This returns the correct time
        echo "<p>". date("Y-m-d H:i:s", $timestamp) ."</p>";

        echo "<p>Now I call 'date_default_timezone_set(\"UTC\")' and echo out the same timestamp.</p>";
        echo "Set timezone = " . date_default_timezone_set("UTC");

        // This returns a time 5 hours in the past
        echo "<p>". date("Y-m-d H:i:s", $timestamp) ."</p>";
        ?>

    The timezone on the server is BST, so the second call to date() should return a time 1 hour behind the first call. It actually returns a time 5 hours behind the first one. I should note that the server was originally set up with the EDT timezone (UTC-4); that was changed to BST (UTC+1) and the server was restarted. I can't figure out if this is a PHP problem or a problem with the server.
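
    One small check I would add (my own suggestion, not part of the original script): print the timezone PHP itself is configured with before touching it. If php.ini still carries the old EDT-era date.timezone value, date() keeps using it regardless of what the operating system's clock says:

        <?php
        // what does PHP itself think the timezone is?
        echo "php.ini date.timezone: " . ini_get('date.timezone') . "\n";
        echo "effective timezone:    " . date_default_timezone_get() . "\n";
        echo date("Y-m-d H:i:s T P") . "\n";   // T = abbreviation, P = UTC offset
        ?>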

    Read the article

  • How to make freelance clients understand the costs of developing and maintaining mature products?

    - by John
    I have a freelance web application project where the client requests new features every two weeks or so. I am unable to anticipate the requirements of upcoming features. So when the client requests a new feature, one of several things may happen:

        1) I implement the feature with ease because it is compatible with the existing platform
        2) I implement the feature with difficulty because I have to rewrite a significant portion of the platform's foundation
        3) The client withdraws the request because it costs too much to implement against the existing platform

    At the beginning of the project, for about six months, all feature requests fell under category 1) because the system was small and agile. But for the past six months, most feature implementation has fallen under category 2). The system is mature, forcing me to refactor and test every time I want to add new modules. Additionally, I find myself breaking things that used to work, and fixing them (I don't get paid for this).

    The client is starting to express frustration at the time and cost for me to implement new features. To them, many of the feature requests are of the same scale as the features they requested six months ago. For example, a client would ask, "If it took you 1 week to build a ticketing system last year, why does it take you 1 month to build an event registration system today? An event registration system is much simpler than a ticketing system. It should only take you 1 week!"

    Because of this scenario, I fear feature requests will soon land in category 3). In fact, I'm already eating a lot of the cost myself because I volunteer many hours to support the project. The client is often shocked when I tell him honestly the time it takes to do something. The client always compares my estimates against the early months of the project. I don't think they're prepared for what it really costs to develop, maintain and support a mature web application. When working on a salary for a company full time, managers were more receptive to my estimates and even encouraged me to pad my numbers to prepare for the unexpected. Is there a way to condition my clients to think the same way? Can anyone offer advice on how I can continue to work on this web project without eating too much of the cost myself?

    Additional info: I've only been freelancing full time for 1 year. I don't yet have the high-end clients, but I'm slowly getting there. I'm getting better quality clients as time goes by.

    Read the article

  • Don’t Miss “Transform Field Service Delivery with Oracle Real-Time Scheduler”

    - by ruth.donohue
    Field resources are an expensive element in the service equation. Maximizing the scheduling and routing of these resources is critical in reducing costs, increasing profitability, and improving the customer experience. Oracle Real-Time Scheduler creates cost-optimized plans and schedules for service technicians that increase operational efficiencies and improve margins. It enhances Oracle’s Siebel Field Service with real-time scheduling and dispatch capabilities that ensure service requests are allocated efficiently and service levels are honored. Join our live Webcast to learn how your organization can leverage Oracle Real-Time Scheduler to:

      • Increase operational efficiency with real-time scheduling that enables field service technicians to handle more calls per day and reduce travel mileage
      • Resolve issues faster with dynamic work flows that ensure you have the right technician with the right skill set for the right job
      • Improve the customer experience with real-time planning that optimizes field technician routing, reduces customer wait times, and minimizes missed SLAs

    Date: Thursday, March 10, 2011
    Time: 8:30 am PT / 11:30 am ET / 4:30 pm UK / 5:30 pm CET

    Click here to register now.

    Technorati Tags: Siebel Field Service, Oracle Real-Time Scheduler

    Read the article

  • What time function do I need to use with pthread_cond_timedwait?

    - by Vincent
    The pthread_cond_timedwait function needs an absolute time in a timespec structure. Which time function am I supposed to use to obtain that absolute time? I have seen a lot of examples on the web, and almost every time function gets used: ftime, clock, gettimeofday, clock_gettime (with all the possible CLOCK_... values). Since pthread_cond_timedwait uses an absolute time, will the waiting time be affected by changing the time of the machine? Also, if I get the absolute time with one of the time functions and the time of the machine changes between the get and the addition of the delta time, will this affect the wait time? Is there a possibility to wait for an event with a relative time instead?
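
    For reference, a sketch of the usual pattern (my own example, not from the post): by default the deadline is interpreted against CLOCK_REALTIME, so it does move if the machine's clock is changed; binding the condition variable to CLOCK_MONOTONIC (a POSIX option available on Linux) makes a "wait 5 seconds from now" deadline immune to that.

        /* sketch: make pthread_cond_timedwait use CLOCK_MONOTONIC; link with -lpthread */
        #include <errno.h>
        #include <pthread.h>
        #include <stdio.h>
        #include <time.h>

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  cond;

        int main(void)
        {
            pthread_condattr_t attr;
            pthread_condattr_init(&attr);
            pthread_condattr_setclock(&attr, CLOCK_MONOTONIC);
            pthread_cond_init(&cond, &attr);

            struct timespec deadline;
            clock_gettime(CLOCK_MONOTONIC, &deadline);  /* same clock as the condattr */
            deadline.tv_sec += 5;                       /* relative delay of 5 seconds */

            pthread_mutex_lock(&lock);
            int rc = pthread_cond_timedwait(&cond, &lock, &deadline);
            pthread_mutex_unlock(&lock);

            printf("timedwait returned %d (%s)\n", rc,
                   rc == ETIMEDOUT ? "timed out" : "woken");
            return 0;
        }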

    Read the article

  • Getting a nicely formatted timestamp without lots of overhead?

    - by Brad Hein
    In my app I have a TextView which contains real-time messages from my app; as things happen, messages get printed to this text box. Each message is time-stamped with HH:MM:SS. Up to now, I had also been chasing what seemed to be a memory leak, but as it turns out, it's just my time-stamp formatting method (see below). It apparently produces thousands of objects that later get gc'd. For 1-10 messages per second, I was seeing 500 KB - 2 MB of garbage collected every second while this method was in place. After removing it, no more garbage problem (it's back to a nice interval of about 30 seconds, and only a few KB of junk typically). So I'm looking for a new, more lightweight method for producing an HH:MM:SS timestamp string :)

    Old code:

        /**
         * Returns a string containing the current time stamp.
         * @return - a string.
         */
        public static String currentTimeStamp() {
            String ret = "";
            Date d = new Date();
            SimpleDateFormat timeStampFormatter = new SimpleDateFormat("hh:mm:ss");
            ret = timeStampFormatter.format(d);
            return ret;
        }
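
    For what it's worth, a lighter-weight sketch (my own; it assumes the method is only ever called from one thread, since SimpleDateFormat is not thread-safe, and it uses HH for 24-hour output where the original used hh): create the formatter once and reuse it, so each call only allocates the Date and the result string.

        import java.text.SimpleDateFormat;
        import java.util.Date;

        public final class TimeStamps {
            // created once instead of on every call; SimpleDateFormat is NOT thread-safe,
            // so this assumes a single calling thread (e.g. the UI thread)
            private static final SimpleDateFormat FORMATTER = new SimpleDateFormat("HH:mm:ss");

            public static String currentTimeStamp() {
                return FORMATTER.format(new Date());
            }
        }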

    Read the article

  • svchost consuming more than 50% CPU all the time in windows 7

    - by claws
    Hello, I'm using Windows 7 Ultimate. The svchost instance containing the DCOM Server Process Launcher, Plug and Play, and Power services is consuming more than 50% of the CPU most of the time. I found this blog post: http://blog.hansmelis.be/2007/06/17/windows-vista-long-delay-when-switching-songs-in-media-player/

        "That process is associated with two services: DCOM Server Process Launcher and Plug and Play. For the Vulcans among us, all logic stops there for a second. What do those two services have to do with WMP? The answer is provided by Vista's new audio engine. The new engine supports several audio "enhancements". But for the enhancements to work, the engine needs to determine if your hardware is up to the task. And when does it check that? Each time a sound output device is accessed. That's pretty nice if you can do a hot swap of sound hardware, but I don't see me doing that anytime soon. Anyways, it does provide us with the link to the correct service because checking hardware is done by the "Plug and Play" service. One might think that deactivating each enhancement would solve the problem, but that's wishful thinking. The configuration of the enhancements is located in the properties of the sound hardware. When opening the tab, I found out that no enhancements were active. Hmmm... so why does it check the hardware? Well, it does that in case you actually enable an enhancement. To completely stop the hardware checking, you have to tick the box labelled Disable all enhancements. As soon as you do that, Vista finally understands you don't want to use them."

    But that's for Vista. Is it the same case with Windows 7 too? I couldn't find any "Disable all enhancements" option in Control Panel > Sound (mmsys.cpl). Where can I find this option in Windows 7, and how do I solve this?

    Read the article

  • time sync with ntpd

    - by guthrie
    I run Debian on several systems, and their times do not seem to stay in sync. I can run ntpdate manually, but I thought I should have an ntpd running that would automate that. I checked with apt and apt-cache but can't find any ntpd (or the associated ntpq), nor any such names on my system (locate ...), yet ntp-doc still describes them. Looking around I see that there is an ntpdate-debian command, and that it uses /etc/default/ntpdate for servers (instead of the standard /etc/ntp.conf); but even though that file is there and says "yes" to using ntp.conf, it fails with "no servers can be used", although ntpdate works fine. Is this just a layer over ntpdate, and is there any reason to use it instead? So: why are they missing, do I need them, and how do I automate time updates? Relatedly, two of my machines are virtualized on a Microsoft VM; how is it that their clocks drift, and both to different values? (The underlying Windows machine's clock seems stable.) I see a few old notes about time and ntp problems on VMware, but I didn't find anything either current or relating to Microsoft VMs. Everything I did see says just to use ntpd, but as above, ...?!

    Read the article

  • Apache LDAP auth: denied all time

    - by Dmytro
    Here is my config (httpd 2.4):

        <AuthnProviderAlias ldap zzzldap>
            LDAPReferrals Off
            AuthLDAPURL "ldaps://ldap.zzz.com:636/o=zzz.com?uid?sub?(objectClass=*)"
            AuthLDAPBindDN "uid=zzz,ou=Applications,o=zzz.com"
            AuthLDAPBindPassword "zzz"
        </AuthnProviderAlias>

        <Location /svn>
            DAV svn
            SVNParentPath /DATA/svn
            AuthType Basic
            AuthName "Subversion repositories"
            SSLRequireSSL
            AuthBasicProvider zzzldap
            <RequireAll>
                Require valid-user
                Require ldap-attribute employeeNumber=12345
                Require ldap-group cn=yyy,ou=Groups,o=zzz.com
            </RequireAll>
        </Location>

    Require valid-user works, but ldap-attribute, ldap-filter, and ldap-group do not: access is denied every time. I have spent a lot of time on this but can't understand what's going on. This is an example from my logs:

        [Tue Sep 25 16:42:26.772006 2012] [authz_core:debug] [pid 23087:tid 139684003014400] mod_authz_core.c(802): [client 1.1.1.1:52624] AH01626: authorization result of Require valid-user : granted
        [Tue Sep 25 16:42:26.772014 2012] [authz_core:debug] [pid 23087:tid 139684003014400] mod_authz_core.c(802): [client 1.1.1.1:52624] AH01626: authorization result of Require ldap-attribute employeeNumber=12345: denied

    I checked all the info with ldapsearch: there is a valid username, employee ID and other attributes...

    Read the article

  • It takes a long time until Windows XP recognizes a connected USB drive

    - by Pavol G
    I have a problem with my new USB disk. When I connect it to my laptop with Windows XP SP2, it takes about 4-5 minutes until Windows recognizes it and shows it as a new disk. I can also see (the disk's LED is blinking) that something is scanning the disk when I connect it; when this is done, Windows immediately recognizes it. Also, when I'm copying data to this disk the speed is about 3.5 MB/sec. It's connected using USB 2.0. I tried to check for spyware (using Spybot) and also tried running Windows in safe mode, but I still have the same problems. Do you have any idea what could help to solve this problem? On Windows Vista (another laptop) everything is OK; the disk loads in about 15 seconds and the speed is about 20-30 MB/sec.

    Edit: I tried to update to SP3 - no change.
    Edit 2: When this "strange" scanning occurs, I can see that the DPCs process is taking about 50% of the CPU. When the scan ends (after 5 minutes) this process takes 0% again.
    Edit 3: About the scan time: currently it's taking about 5 minutes, but this time grows as I add more data to the disk. Currently it holds about 40 GB, and I don't want to see how long it will take with 1000 GB.

    Thanks a lot for any advice!

    Read the article

  • Snow Leopard takes a long time to connect to Windows/Samba server

    - by hood
    We run a very heterogeneous network here: there are XP, Vista, 7, Leopard and Snow Leopard clients, and Windows 2003 (one remaining legacy app), 2008, and Linux servers. The main file server runs Ubuntu Linux, has been added to the Windows domain, and has been used for many years; SBS 2008 is the PDC (the 2003 and 2008 machines are on the domain also). In Leopard there were no problems at all authenticating to the file servers. We've upgraded one of the Leopard iMacs to Snow Leopard, and the same problem occurs on a new MBP which came with the newer OS, as well as on a clean install on another iMac. It does not matter whether it is connected through wired or wireless. In the Finder, when clicking on the server - whether on first boot or after it is connected - it will display "Connecting..." for up to a few minutes before either generally working (if the username/password is in the keychain) or displaying "Connection Failed", at which point clicking "Connect As" and typing in the username/password will take some more time and eventually work. Sometimes it will display "Connecting..." indefinitely. (I've left it as long as 15 minutes before trying something else.) Accessing shares on the 2003 and SBS servers has the problem (so I don't think it's a Samba server issue). The Server 2008 Standard is connecting instantly at the moment. Accessing the share through an alias/stacks doesn't have this problem. Leopard and Windows clients still have no problem. I've searched Google but it hasn't yielded any working results. How do I get rid of this delay?

    Read the article

  • Long access time for static web page on virtual machine

    - by Karol
    My setup:

        Windows 7 on a workstation that I use at work (with a domain) and at home (no domain)
        A virtual machine (VMware) that runs Arch Linux (I will call it just "Linux") with its network interface in bridged mode. Linux serves web pages with Nginx.

    The IP address of the Linux machine is 192.168.0.16 and is added to C:\windows\system32\drivers\etc\hosts:

        192.168.0.16 bridged bri

    The IP address of the Windows workstation is added to /etc/hosts:

        192.168.0.10 workstation

    I can add more details to my setup description (I am not sure what is relevant).

    The question: often (but not always) it takes a long time for a web browser (Firefox) to open a static web page served by Linux. I am sure it is not a performance issue. To be more specific: it takes roughly 20 seconds to resolve(?) the address http://bridged in the browser. Additionally, I have just installed the samba service and noticed a similar problem, so it is not specific to the browser and HTTP. Initial access to samba shares also takes a long time.

    Read the article

  • How to use multiple variables in time calculator c#

    - by Peter O'Dwyer
    I am building a video time calculator and need to use the number of frames instead of milliseconds, e.g. at 25 FPS:

        HH:MM:SS:FR
        00:00:10:24 <-- Last frame of 10 seconds
        00:00:11:00 <-- First frame of 11 seconds

    My problem is that if the start frames are lower than my end frames it can't calculate the time.

        00:00:10:05 <-- Start
        01:00:10:10 <-- End
        00:59:59:05 <-- Answer which I can't get!!

    HERE'S MY CODE:

        string Startdate = Dur_txtStart.Text;
        string Enddate = Dur_txtEnd.Text;
        string Startdate1 = Startdate.Substring(0, 8);
        int Startframe1 = Convert.ToInt32(Startdate.Substring(9, 2));
        string Startdate2 = Enddate.Substring(0, 8);
        int Startframe2 = Convert.ToInt32(Enddate.Substring(9, 2));

        TimeSpan diff = DateTime.Parse(Startdate2).Subtract(DateTime.Parse(Startdate1));
        int frameDiff = Startframe2 - Startframe1;
        int Hour = Convert.ToInt32(diff.Hours);
        int Min = Convert.ToInt32(diff.Minutes);
        int Sec = Convert.ToInt32(diff.Seconds);

        if (frameDiff < 0 && Sec > 0)
        {
            Sec -= 1;
            string frameDiffBuild = string.Format("{0}:{1}:{2}:{3}", Hour.ToString("D2"), Min.ToString("D2"), Sec.ToString("D2"), (Convert.ToInt32(Dur_txtFPS.Text + frameDiff)).ToString("D2"));
            Dur_txtOutTime.Text = frameDiffBuild;
        }
        else
        {
            string frameDiffBuild = string.Format("{0}:{1}:{2}:{3}", Hour.ToString("D2"), Min.ToString("D2"), Sec.ToString("D2"), frameDiff.ToString("D2"));
            Dur_txtOutTime.Text = frameDiffBuild;
        }

    Hope you can help guys, my head is mashed!
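
    A different way to structure the calculation (a sketch of my own, not a fix to the code above; it assumes "HH:MM:SS:FF" strings and an integer frame rate): convert each timecode to a total frame count, subtract, and convert back, so there is only one kind of borrow to worry about.

        // sketch: timecode arithmetic via total frame counts
        static int ToFrames(string tc, int fps)
        {
            string[] p = tc.Split(':');    // "HH:MM:SS:FF"
            return ((int.Parse(p[0]) * 60 + int.Parse(p[1])) * 60 + int.Parse(p[2])) * fps
                   + int.Parse(p[3]);
        }

        static string FromFrames(int frames, int fps)
        {
            int ff = frames % fps; frames /= fps;
            int ss = frames % 60;  frames /= 60;
            int mm = frames % 60;  int hh = frames / 60;
            return string.Format("{0:D2}:{1:D2}:{2:D2}:{3:D2}", hh, mm, ss, ff);
        }

        // e.g. FromFrames(ToFrames("00:00:11:00", 25) - ToFrames("00:00:10:24", 25), 25) == "00:00:00:01"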

    Read the article

  • Windows 7 hangs after going into sleep a second time

    - by Brian Stephenson
    I've searched everywhere around Google and can't figure out why this is happening, so I decided to ask here to see if anyone has a problem like this. Like it says in the title, whenever I sleep ONCE I'm able to wake the system, but going back to sleep again AFTER waking up for the first time results in it hanging with no input and no output, with the fan spinning as fast as possible and a lot of heat being spewed out as well. I've tried various things like setting all USB root hubs to not get switched off for power saving, disabling USB selective suspend, disabling PCI-e link state power management, and even unplugging ALL USB devices, and it won't wake up after the second attempt. I've even waited up to a full hour of the CPU fan spinning loudly and it's still stuck trying to wake up. The only USB devices I use are a Microsoft USB Comfort Curve Keyboard 2000 (IntelliType Pro) and a generic HID-compliant mouse from Creative, model number OMC90S "CREATIVE MOUSE OPTICAL LITE". My other devices like external drives and controllers are unplugged when I'm not using them, as having too many USB devices plugged in at a time causes a deadlock on almost all of the ports I have.

    Here are my system specifications (most of these are from CPU-Z):

        Brand: Gateway DX4300-19
        Mainboard: Gateway RS780
        Chipset: AMD 780G Rev 00
        Southbridge: AMD SB700 Rev 00
        LPCIO: ITE IT8718
        BIOS: American Megatrends Inc. ver P01-A4 09/15/2009
        CPU: AMD Phenom II X4 810 at 2.60 GHz
        RAM: 8.0 GB DDR2 Dual Channel Ganged Mode at 400 MHz
        GPU: ATI Radeon HD3200 Graphics Integrated - RS780
        OS: Windows 7 Home Premium x64 OEM (Acer Group)
        HDD: WDC WD10EADS-22M2B0 1.0 TB (Western Digital Green Caviar)

    My BIOS has absolutely no control over whether the sleep mode is S1 or S3, so I can't check these settings or even change them. Hybrid sleep is also disabled. I can successfully go into hibernation and wake from hibernation, but this is painfully slow due to a hard drive problem I'm having with this "Green Drive" (hibernation takes over ~3 minutes to complete). Any help would be appreciated, thanks.

    Read the article

  • Percona-server time out on /etc/init.d/mysql start

    - by geekmenot
    Every time I start MySQL using /etc/init.d/mysql start or service mysql start, it times out:

        * Starting MySQL (Percona Server) database server mysqld    [fail]

    However, I can get into mysql. I just wanted to know if there is a problem with the install, because it happens every time, not as a one-off error. mysql-error.log shows:

        121214 11:25:56 mysqld_safe Starting mysqld daemon with databases from /data/mysql/
        121214 11:25:56 [Note] Plugin 'FEDERATED' is disabled.
        121214 11:25:56 InnoDB: The InnoDB memory heap is disabled
        121214 11:25:56 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121214 11:25:56 InnoDB: Compressed tables use zlib 1.2.3
        121214 11:25:56 InnoDB: Using Linux native AIO
        121214 11:25:56 InnoDB: Initializing buffer pool, size = 14.0G
        121214 11:25:58 InnoDB: Completed initialization of buffer pool
        121214 11:26:01 InnoDB: Waiting for the background threads to start
        121214 11:26:02 Percona XtraDB (http://www.percona.com) 1.1.8-rel29.2 started; log sequence number 9333955393950
        121214 11:26:02 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
        121214 11:26:02 [Note] - '0.0.0.0' resolves to '0.0.0.0';
        121214 11:26:02 [Note] Server socket created on IP: '0.0.0.0'.
        121214 11:26:02 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.005163' at position 624540946, relay log '/data/mysql/mysql-relay-bin.000043' position: 624541092
        121214 11:26:02 [Note] Slave I/O thread: connected to master '[email protected]:3306', replication started in log 'mysql-bin.005180' at position 823447620
        121214 11:26:02 [Note] Event Scheduler: Loaded 0 events
        121214 11:26:02 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.5.28-29.2-log'  socket: '/data/mysql/mysql.sock'  port: 3306  Percona Server (GPL), Release 29.2

    Read the article

  • Unable to connect to server after a certain amount of time

    - by Troy
    I am a business FIOS subscriber with 5 static IPs. I have the following network setup:

        Verizon-provided ONT
        D-Link switch
        Dell server running Ubuntu 12.04 with iptables enabled and a static IP address

    The makes/models of the hardware are:

        FIOS ONT: Alcatel-Lucent I-211M-H ONT
        Switch: D-Link Web Smart Switch DES-1228P
        Server: Dell Optiplex 755 (Ubuntu 12.04 Server)

    I have iptables running on the server with the http, https and ssh ports open. I can connect to a website on the server from an external computer, but after a certain amount of time (minutes to hours), I can no longer connect. All I have to do to re-enable connectivity is connect to the server via SSH from a computer INSIDE the network. I don't have to actually log in, I just have to establish a connection. I can then access the website externally again.

    I did some googling and it seems some of Verizon's equipment had an ARP bug where the ARP entries would expire after a certain time period, but those issues all seem to be from back in 2009-2010. I know the switch has an 'auto learning MAC address' feature, but I'm not sure if that could be the problem or not. Does anyone have any ideas or advice on how I can troubleshoot this? Thanks!

    Read the article

  • Hearing a clicking noise from soundcard all the time

    - by Mehrdad
    I have installed Fedora 17 on my laptop. A few days ago I updated my Fedora (but did not upgrade it). I shut down my computer, and since the next time I turned it on I have been hearing a clicking noise from the speakers all the time. Even when I plug my headphones in, I hear the noise through the headphones. I surfed the internet and found the following shell commands:

        su -c 'echo "options snd_hda_intel power_save=0" >> /etc/modprobe.d/snd_hda_intel.conf'
        su -c 'echo 0 > /sys/module/snd_hda_intel/parameters/power_save'

    I tried them but they didn't work. Here is the part of the "lspci" output related to my sound card:

        00:1b.0 Audio device: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) High Definition Audio Controller (rev 03)

    I have to add that my sound card is working and I can play audio files; I mean I can hear the sound and the noise simultaneously. But everything is OK in Windows XP, which is also installed on my laptop. Could it be related to the sound-card driver? If so, how can I revert it to the previous version?

    Read the article

  • Perfmon % Processor Time vs. task manager's CPU usage

    - by nat
    I'm new to using Perfmon and performance monitoring in general (so go easy on me please ;) I know that Perfmon doesn't have anything exactly like Task Manager's CPU usage display, but I'm trying to figure out how to monitor users' CPU usage via Perfmon in a similar way, and trying to understand the measurements (or how to convert the numbers to get a similar understanding). For example, if in Task Manager a particular user is consistently using more than 5% CPU, I would want to contact the user about it.

    I learn best by example, so here is exactly what I'm trying to do, with a specific example. This is for a 32-bit dual quad-core Windows 2003 web server (8 CPUs); there are many web sites on the server, each running within its own application pool/worker process ID. Through other research here I learned of a registry change that I made so that the PID shows up with the w3wp process, so I can easily identify the site later by cross-referencing it. I set up a counter with the following settings:

        Process -> % Processor Time -> all instances

    Here is an example. Say I'm interested in the "black line" user in the graph below, as his process is spiking quite high compared to all the other users. (I wasn't allowed to post the image as I'm a new user on this site; I've uploaded it to http://i35.tinypic.com/106yn8k.jpg)

    So, using this as an example, I see that they have an AVERAGE % PROCESSOR TIME of 23.264, and have spiked as high as 103.124.

    So what exactly does this 23.264 number mean to me? Is it similar to an average of Task Manager's CPU reading for this user? Or, since this server has 8 CPUs, should I divide this number by 8? (23.264 / 8 = 2.9% average CPU load?)

    Thanks in advance.

    Read the article
