Search Results

Search found 24879 results on 996 pages for 'prime number'.

Page 308/996

  • Sharepoint/WSS Reporting Services Integration woes

    - by mhollers
    After a number of failed attempts I seem to have successfully installed the Reporting Services add-in to my WSS farm. However, I appear to be missing most of the enhanced functionality, e.g. no Report Library template and no Report Center site template; the only additional functionality available is the Report Viewer web part.

    Background: a 2-server WSS 3.0 farm with Central Admin (CA), the web front end (WFE) and the Reporting Services add-in installed on one server, and SQL05 SP2 with Reporting Services (RS) and all databases installed on the other. I have a VM environment set up and have rolled this back and repeated the install a number of times. I have configured RS within CA and activated the 'Report Server Integration Feature'. Within Site Settings I have a 'Reporting Services' heading with a 'Manage shared schedules' item/link; I'm not sure whether there should be other options.

    I was under the impression that to view reports within SharePoint I could either create a new site using the Report Center template or add a Report Library to an existing site, neither of which seems available. I am at a loss as to what to do, as all the online information seems to deal with installation issues/errors, which I seem to have eventually got past.

    Read the article

  • Home server hard drive: 186k start-stop cycles in 325 days?

    - by j-g-faustus
    I set up a home server about a year ago, using Ubuntu Server (10.04 LTS at the moment), four disks in RAID 5 for storage (WD Green 1.5 TB) and a laptop drive for the OS. Today the output of smartctl, a command-line utility for checking the SMART attributes of a hard drive, tells me that the primary OS drive has had no less than 186,000 start-stop cycles in 325 days and may be nearing the end of its lifespan.

    The smartctl output is in "normalized values", in this case a number between 200 and 000, where 200 is "brand new" and 000 means "worn out". My disk gets 001.

    So I wonder what happened: 186k start/stop cycles in 7820 hours is about one start/stop per 2.5 minutes around the clock. This seems somewhat excessive for a computer that sees actual use once or twice per day. (The RAID disks are normal, averaging one start/stop per day, as expected.)

    Does anyone have similar experiences, or pointers to what might be the issue here? Specifically I'd like to know:

    - Why the massive start/stop count? Do I have some sort of configuration issue?
    - Could there be a background service that is causing trouble?
    - Could having a laptop disk as the OS drive be part of the problem? Can anyone confirm or deny this?

    Here is the /etc/hdparm.conf configuration:

        /dev/sda {
            apm = 127
            spindown_time = 120
        }

    and the most relevant parts of smartctl --attributes /dev/sda:

        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME        FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate   0x002f 200   200   051    Pre-fail Always  -           0
          4 Start_Stop_Count      0x0032 001   001   000    Old_age  Always  -           185875
          9 Power_On_Hours        0x0032 090   090   000    Old_age  Always  -           7820
         12 Power_Cycle_Count     0x0032 100   100   000    Old_age  Always  -           109
        193 Load_Cycle_Count      0x0032 118   118   000    Old_age  Always  -           246833
        194 Temperature_Celsius   0x0022 107   098   000    Old_age  Always  -           36

    As I generally prefer my drives to last more than a year, any advice is appreciated.
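
    As a sanity check on that estimate, here is a minimal Python sketch using the raw SMART values quoted above (plain arithmetic, nothing smartctl-specific):

        # Rate check based on the SMART raw values quoted above.
        start_stop_count = 185875   # attribute 4 (Start_Stop_Count), raw value
        power_on_hours = 7820       # attribute 9 (Power_On_Hours), raw value

        cycles_per_hour = start_stop_count / power_on_hours
        minutes_per_cycle = 60 / cycles_per_hour

        print(f"{cycles_per_hour:.1f} start/stop cycles per hour")
        print(f"one cycle roughly every {minutes_per_cycle:.1f} minutes")
        # -> about 23.8 cycles/hour, i.e. one every ~2.5 minutes, matching the estimate above.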

    Read the article

  • App-V Problems After SCCM 2007 SP2 Upgrade

    - by GAThrawn
    We're running an SCCM (MS System Centre Config Manager, successor to SMS) 2007 environment and delivering a number of applications to clients virtualized using App-V 4.5.1. The App-V apps are delivered by SCCM in Download-and-Execute mode, not streaming mode.

    The SCCM environment was recently service packed to SCCM 2007 SP2 (amongst other things this gives Win7 support). We also pushed out the updated SCCM clients to our workstations. This seems to have broken file associations for virtual apps for a large number of our users. The users can still open their App-V apps by finding the specific app on their Start Menu and clicking its icon, but double-clicking an associated file in Explorer, or opening an email attachment, gives the "This action is only valid for installed applications" error.

    There is a TechNet blog entry from the App-V team talking about this issue, "Upgrade to ConfigMgr 2007 SP2 may break App-V File Type Associations", but running the script there comes back saying "The User Interface option has been updated" and hasn't seemed to fix the problem for any of our users. Unfortunately the TechNet blogs don't seem to have comments switched on, so you can't see how this has worked out for other people.

    Anyone else had this problem? Have you found any other way to fix it?

    Read the article

  • Transpose matrix-style table to 3 columns in Excel

    - by polarbear2k
    I have a matrix-style table in Excel where B1:Z1 are column headings and A2:A99 are row headings. I would like to convert this table to a 3-column table (column heading, row heading, cell value). It does not matter in what order the new table is. For example, starting from the grid on the left, either of the two layouts on the right would do:

            A   B   C   D            A   B   C             A   B   C
        1       H1  H2  H3       1   H1  R1  V1        1   H1  R1  V1
        2   R1  V1  V2  V3  =>   2   H1  R2  V4   or   2   H2  R1  V2
        3   R2  V4  V5  V6       3   H1  R3  V7        3   H3  R1  V3
        4   R3  V7  V8  V9       4   H2  R1  V2        4   H1  R2  V4
                                 5   H2  R2  V5        5   H2  R2  V5
                                 6   H2  R3  V8        6   H3  R2  V6
                                 7   H3  R1  V3        7   H1  R3  V7
                                 8   H3  R2  V6        8   H2  R3  V8
                                 9   H3  R3  V9        9   H3  R3  V9

    I've been playing around with the OFFSET function to create the whole table, but I feel like a combination of TRANSPOSE and V/HLOOKUP is required. Thanks

    EDIT: I have managed to come up with the correct formulas. If the data is in Sheet1 as in my example above, the formulas go in Sheet2:

        [A1] =IF(ROW() <= COUNTA(Sheet1!$B$1:$Z$1)*COUNTA(Sheet1!$A$2:$A$99), OFFSET(Sheet1!$A$1,0,IF(MOD(ROW(),COUNTA(Sheet1!$B$1:$Z$1))=0,COUNTA(Sheet1!$B$1:$Z$1),MOD(ROW(),COUNTA(Sheet1!$B$1:$Z$1)))),"")
        [B1] =IF(ROW() <= COUNTA(Sheet1!$B$1:$Z$1)*COUNTA(Sheet1!$A$2:$A$99),OFFSET(Sheet1!$A$1,IF(MOD(ROW(),COUNTA(Sheet1!$A$2:$A$99))=0,COUNTA(Sheet1!$A$2:$A$99),MOD(ROW(),COUNTA(Sheet1!$A$2:$A$99))),0),"")
        [C1] =IF(ROW() <= COUNTA(Sheet1!$B$1:$Z$1)*COUNTA(Sheet1!$A$2:$A$99),OFFSET(Sheet1!$A$1,IF(MOD(ROW(),COUNTA(Sheet1!$A$2:$A$99))=0,COUNTA(Sheet1!$A$2:$A$99),MOD(ROW(),COUNTA(Sheet1!$A$2:$A$99))),IF(MOD(ROW(),COUNTA(Sheet1!$B$1:$Z$1))=0,COUNTA(Sheet1!$B$1:$Z$1),MOD(ROW(),COUNTA(Sheet1!$B$1:$Z$1)))),"")

    The formulas are limited to B1:Z1 for the headings and A2:A99 for the rows (these can be increased to their maximums if required). The COUNTA() formula returns the number of cells that actually have values, which limits the number of rows returned to headings*rows. Otherwise the formulas would go on for infinity because of the MOD function.
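
    For comparison, the same unpivot logic written as a short Python sketch (an illustration only, assuming the matrix is held as plain lists rather than an Excel sheet); it produces the column-major ordering of the first target layout above:

        # Matrix from the example above: column headings, row headings, and values.
        col_headings = ["H1", "H2", "H3"]
        row_headings = ["R1", "R2", "R3"]
        values = [
            ["V1", "V2", "V3"],
            ["V4", "V5", "V6"],
            ["V7", "V8", "V9"],
        ]

        # Unpivot to (column heading, row heading, value) triples,
        # column-major like the first target layout above.
        triples = [
            (col_headings[c], row_headings[r], values[r][c])
            for c in range(len(col_headings))
            for r in range(len(row_headings))
        ]

        for col_h, row_h, value in triples:
            print(col_h, row_h, value)
        # First rows printed: H1 R1 V1, H1 R2 V4, H1 R3 V7, then H2 R1 V2, ...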

    Read the article

  • Extend RAID 1 (HP SmartArray P410i) running Linux

    - by Oliver
    I took over a fairly simple server setup with the following RAID 1 config running Ubuntu 11.10 (Kernel 3.0.0-12-server x86_64):

        => ctrl all show config

        Smart Array P410i in Slot 0 (Embedded)    (sn: removed)

           array A (SAS, Unused Space: 1335535 MB)
              logicaldrive 1 (279.4 GB, RAID 1, OK)
              physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 1 TB, OK)
              physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 1 TB, OK)

    Initially there were two 300 GB disks that got replaced by 1 TB disks, and I now have to extend the logical volume to use that extra space. However, when trying to do so I get the following warning:

        => ctrl slot=0 ld 1 modify size=max

        Warning: Extension may not be supported on certain operating systems.
                 Performing extension on these operating systems can cause data to
                 become inaccessible. See ACU documentation for details.
        Continue? (y/n)

    Is it safe to say yes, or am I at risk of corrupting the file system / losing data? Rearranging and extending the file system afterwards shouldn't be an issue, as I can take the server offline and boot from a GParted live disk.

    Here's the config of the RAID controller in use:

        => ctrl all show detail

        Smart Array P410i in Slot 0 (Embedded)
           Bus Interface: PCI
           Slot: 0
           Serial Number: removed
           RAID 6 (ADG) Status: Disabled
           Controller Status: OK
           Hardware Revision: Rev C
           Firmware Version: 5.12
           Rebuild Priority: Medium
           Expand Priority: Medium
           Surface Scan Delay: 15 secs
           Surface Scan Mode: Idle
           Wait for Cache Room: Disabled
           Surface Analysis Inconsistency Notification: Disabled
           Post Prompt Timeout: 0 secs
           Cache Board Present: False
           Drive Write Cache: Disabled
           SATA NCQ Supported: True

    And the partition table:

        Number  Start   End     Size    Type      File system     Flags
         1      1049kB  274GB   274GB   primary   ext4            boot
         2      274GB   300GB   25.8GB  extended
         5      274GB   300GB   25.8GB  logical   linux-swap(v1)

    Read the article

  • Windows errors, how do I find root cause and fix it? Getting several errors

    - by Eric Martin
    My server is having issues and not responding to customers' HTTPS requests. I checked the Event Viewer and found several errors. These two are listed a couple of times:

        WINS encountered a database error. This may or may not be a serious error. WINS will try to recover from it. You can check the database error events under 'Application Log' category of the Event Viewer for the Exchange Component, ESENT, source to find out more details about database errors. If you continue to see a large number of these errors consistently over time (a span of few hours), you may want to restore the WINS database from a backup. The error number is in the second DWORD of the data section.

    And this one:

        An error occured while using SSL configuration for socket address 0.0.0.0:444. The error status code is contained within the returned data.

        SQL Server is not ready to accept new client connections. Wait a few minutes before trying again. If you have access to the error log, look for the informational message that indicates that SQL Server is ready before trying to connect again. [CLIENT: xxx.xxx.xxxx.xxx]

    I also found this in the Event Viewer, but the computer has been restarted since this message and I have not seen it again:

        Configuring the Page file for crash dump failed. Make sure there is a page file on the boot partition and that is large enough to contain all physical memory.

    My virtual memory settings are shown in the attached screenshot. I'm not familiar with WINS, so I wasn't sure if that is where I should start or how to resolve it. Is the WINS error causing the other problems, or should I be looking somewhere else?

    Read the article

  • Windows Vista: Folder options not working

    - by Thomas Matthews
    I'm running Windows Vista Service Pack 2. I have the Folder Options set to "Open each folder in the same window" (Organize | Folder Options | General). Each time I open a folder, it opens in a new window. How do I get it to open each folder in the same window?

    I have tried the following techniques with no success:

    1. Expand the number of MBAG entries in the Registry. (The number is 40,000 now.)
    2. Delete the Bags and MBAG entries in the Registry. (Rebooted the machine afterwards; still no success.)
    3. Change to "Open each folder in its own window", save, then change back.
    4. Under the View tab, change "Remember each folder's view settings": unchecked, Apply to Folders, checked, Apply to Folders.
    5. Apply the application from Annoyances.org. Still no success.
    6. Click Reset Default Options, then OK. (Opening in the same folder is a default option!) Still unsuccessful.

    I want the folders to open in the same window, just like the options say.

    Read the article

  • nginx server over https using up all available file handles

    - by mmr
    Hi all,

    So I have an nginx server that's working over HTTPS with Sinatra. When I try to download a jnlp file in a configuration that works fine over Mongrel and http (no s), the nginx server fails to serve the file with a 504 error. Subsequent checking of the logs shows that this error is due to overflowing the available number of file handles, i.e. "24: too many open files".

    Running sudo lsof -p <nginx worker pid> gets me a huge list of files, all looking like:

        nginx   1771 nobody   11u  IPv4 10867997   0t0  TCP localhost:44704->localhost:https (ESTABLISHED)
        nginx   1771 nobody   12u  IPv4 10868113   0t0  TCP localhost:https->localhost:44704 (ESTABLISHED)
        nginx   1771 nobody   13u  IPv4 10868114   0t0  TCP localhost:44705->localhost:https (ESTABLISHED)
        nginx   1771 nobody   14u  IPv4 10868191   0t0  TCP localhost:https->localhost:44705 (ESTABLISHED)
        nginx   1771 nobody   15u  IPv4 10868192   0t0  TCP localhost:44706->localhost:https (ESTABLISHED)
        nginx   1771 nobody   16u  IPv4 10868255   0t0  TCP localhost:https->localhost:44706 (ESTABLISHED)
        nginx   1771 nobody   17u  IPv4 10868256   0t0  TCP localhost:44707->localhost:https (ESTABLISHED)
        nginx   1771 nobody   18u  IPv4 10868330   0t0  TCP localhost:https->localhost:44707 (ESTABLISHED)
        nginx   1771 nobody   19u  IPv4 10868331   0t0  TCP localhost:44708->localhost:https (ESTABLISHED)
        nginx   1771 nobody   20u  IPv4 10868434   0t0  TCP localhost:https->localhost:44708 (ESTABLISHED)

    Increasing the number of files that can be opened is no help, because then nginx just blows right past that limit. And no wonder: it looks like it's in some kind of loop pulling in all available file handles.

    Any idea what's going on, and how to fix it?

    Read the article

  • Is there an IE8 setting or policy to make it work like IE7 with respect to persistent connections?

    - by Stephen Pace
    I am working with a commercial application running on XP using IIS 5.1. Periodically the application returns the IIS error "There are too many people accessing the Web site at this time." This is caused by Microsoft artificially limiting the number of connections (10) under IIS 5.1 on Windows XP, but in this case there is really only one user (albeit with a few tabs open at a time).

    Microsoft suggests you can reduce the problem by turning off HTTP keep-alives for that particular web site (http://support.microsoft.com/kb/262635):

        If you use IIS 5.0 on Windows 2000 Professional or IIS 5.1 on Microsoft Windows XP Professional, disable HTTP keep-alives in the properties of the Web site. When you do this, a limit of 10 concurrent connections still exists, but IIS does not maintain connections for inactive users.

    I may do that; however, I'm worried about performance degradation. I also notice that IE8 appears to handle this differently than IE7. By default, IE6 and IE7 use 2 persistent connections while IE8 uses 6. Perhaps in this case IE8 itself is generating multiple connections in an attempt to be faster, and those additional connections are overwhelming the artificially limited IIS 5.1 on XP?

    Assuming that is the case, is there an Internet Explorer option, registry setting, or policy I can set to force IE8 to behave like IE7 with respect to persistent connections? I would not set this for all users, but for the small number of users that use this application it might solve their intermittent problem until the application can be rehosted on Windows Server 2008. Thanks.
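
    In case it helps anyone searching later: the setting most often cited for this is the classic WinINet per-server connection limit. Below is a hedged sketch of applying it with Python's winreg module; the value names (MaxConnectionsPerServer / MaxConnectionsPer1_0Server) are the ones documented for earlier IE versions, and whether a given IE8 build honors them should be verified before rolling anything out - a Group Policy Preference writing the same DWORDs would be the managed equivalent.

        import winreg

        # Hedged sketch: the classic WinINet per-server connection limits live under
        # HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings as DWORD
        # values. Setting them to 2 is intended to mimic IE7's default; verify the
        # effect on your IE8 build before deploying.
        KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings"

        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "MaxConnectionsPerServer", 0, winreg.REG_DWORD, 2)
            winreg.SetValueEx(key, "MaxConnectionsPer1_0Server", 0, winreg.REG_DWORD, 2)

        print("Per-server connection limits set to 2 for the current user "
              "(takes effect after restarting IE).")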

    Read the article

  • Iptables: "-p udp --state ESTABLISHED"

    - by chris_l
    Hi, let's look at these two iptables rules, which are often used to allow outgoing DNS:

        iptables -A OUTPUT -p udp --sport 1024:65535 --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT  -p udp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT

    My question is: how exactly should I understand the ESTABLISHED state in UDP? UDP is stateless.

    Here is my intuition - I'd like to know if or where this is incorrect. The man page tells me this:

        state
            This module, when combined with connection tracking, allows access to the connection tracking state for this packet. --state ...

    So, iptables basically remembers the port number that was used for the outgoing packet (what else could it remember for a UDP packet?), and then allows the first incoming packet that is sent back within a short timeframe? An attacker would have to guess the port number (would that really be too hard?)

    About avoiding conflicts: the kernel keeps track of which ports are blocked (either by other services, or by previous outgoing UDP packets), so that these ports will not be used for new outgoing DNS packets within the timeframe? (What would happen if I accidentally tried to start a service on that port within the timeframe - would that attempt be denied/blocked?)

    Please find all errors in the above text :-)

    Thanks, Chris
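
    My mental model of the above, written out as a toy Python sketch (an illustration of the idea only, not how the kernel's conntrack is actually implemented):

        import time

        # Toy model of the intuition described above: conntrack treats a UDP
        # "connection" as the 4-tuple of an outgoing packet, and a reply counts as
        # ESTABLISHED if the reverse tuple arrives before the entry times out.
        UDP_TIMEOUT = 30.0  # seconds; the real timeouts are configurable kernel settings

        table = {}  # (src_ip, src_port, dst_ip, dst_port) -> time the entry was created

        def outgoing(src_ip, src_port, dst_ip, dst_port):
            """Record an outgoing UDP packet (the NEW state)."""
            table[(src_ip, src_port, dst_ip, dst_port)] = time.monotonic()

        def incoming_is_established(src_ip, src_port, dst_ip, dst_port):
            """An incoming packet is 'ESTABLISHED' if it reverses a recent outgoing tuple."""
            created = table.get((dst_ip, dst_port, src_ip, src_port))
            return created is not None and time.monotonic() - created < UDP_TIMEOUT

        # A DNS query from a high port to port 53, then the server's reply:
        outgoing("192.0.2.10", 40000, "198.51.100.1", 53)
        print(incoming_is_established("198.51.100.1", 53, "192.0.2.10", 40000))  # True
        print(incoming_is_established("198.51.100.1", 53, "192.0.2.10", 40001))  # False: wrong port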

    Read the article

  • NTDS Replication Warning (Event ID 2089)

    - by Chris_K
    I have a simple little network with 3 AD servers in 2 sites. Site A has Win2k3 SP2 and Win2k SP4 servers, site B has a single Win2k3 SP2 server. All have been in place for at least 3 years now.

    Just last week I started getting Event 2089 "not backed up" warnings (example below) on both of the Win2k3 servers. I understand what the message means, no need to send me links to the TechNet article explaining it. I'll improve my backups. What I'm more curious about is: why did I just start getting this message now? Why haven't I been getting it for the past 3 years?!?

    Perhaps this is related: I recently decommissioned a few other sites and AD controllers (there used to be 3 more sites, each with their own controller). Don't worry, I did proper DCpromo exercises and made sure we didn't lose anything. But would shutting those down possibly be related to why I get this error now? This won't keep me awake at night, but I am curious as to what changed...

        Event Type: Warning
        Event Source: NTDS Replication
        Event Category: Backup
        Event ID: 2089
        Date: 3/28/2010
        Time: 9:25:27 AM
        User: NT AUTHORITY\ANONYMOUS LOGON
        Computer: RedactedName
        Description:
        This directory partition has not been backed up since at least the following number of days.
        Directory partition: DC=MyDomain,DC=com
        'Backup latency interval' (days): 30
        It is recommended that you take a backup as often as possible to recover from accidental loss of data. However if you haven't taken a backup since at least the 'backup latency interval' number of days, this message will be logged every day until a backup is taken. You can take a backup of any replica that holds this partition. By default the 'Backup latency interval' is set to half the 'Tombstone Lifetime Interval'. If you want to change the default 'Backup latency interval', you could do so by adding the following registry key.
        'Backup latency interval' (days) registry key: System\CurrentControlSet\Services\NTDS\Parameters\Backup Latency Threshold (days)
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Read the article

  • Not able to register SIP user on red5server, using red5phone

    - by sunil221
    I start Red5, and then I start red5phone. I try to register a SIP user; the details I provide are:

        username = 999999
        password = ****
        ip = asteriskserverip

    and I got:

        Registering contact -- sip:[email protected]:5072

    The right contact should be:

        sip:99999@asteriskserverip

    This is the log:

        SipUserAgent - listen -> Init...
        Red5SIP register
        [SIPUser] register
        RegisterAgent: Registering contact <sip:[email protected]:5072> (it expires in 3600 secs)
        RegisterAgent: Registration failure: No response from server.
        [SIPUser] SIP Registration failure Timeout
        RegisterAgent: Failed Registration stop try.
        Red5SIP Client leaving app 1
        Red5SIP Client closing client 35C1B495-E084-1651-0C40-559437CAC7E1
        Release ports: sip port 5072 audio port 3002
        Release port number:5072
        Release port number:3002
        [SIPUser] close1
        [SIPUser] hangup
        [SIPUser] closeStreams
        RTMPUser stopStream
        [SIPUser] unregister
        RegisterAgent: Unregistering contact <sip:[email protected]:5072>
        SipUserAgent - hangup -> Init...
        SipUserAgent - closeMediaApplication -> Init...
        [SIPUser] provider.halt
        RegisterAgent: Registration failure: No response from server.
        [SIPUser] SIP Registration failure Timeout

    Please let me know if I am doing anything wrong.

    Regards,
    Sunil

    Read the article

  • Small Business HP Virtualisation and iSCSI SAN Options

    - by Robin Day
    We are a small business that hosts our core product on a number of HP servers. Our core production setup is:

        1x HP DL380, high powered, for a SQL Server database
        1x HP DL360, mid powered, for our core application server
        6x HP DL320, low powered, for our front ends

    We run our training / testing / support systems on a similar setup; the servers are just older and less powerful. Unfortunately this is now causing us issues, as the system has grown beyond the capabilities of these older servers. Upgrading these servers would be expensive, and we believe that virtualisation is probably the way to go for the future.

    Locally we run a number of test / dev environments on ESXi using direct storage on a couple of high powered DL360s, and these are performing fairly well. We're thinking that instead of replacing all of our test servers we can implement an iSCSI SAN and one or two high powered hosts. Hopefully, when it comes to replacing our live servers as well, we can just expand the virtual environment to cope.

    So my question is... can anyone offer any advice on some suitable options? We have generally always been extremely happy with HP servers, all of our kit is currently HP, therefore our preference would be to stick with HP; however, I'm always happy to hear about other options. I'm hoping that initially a budget of around 15-25k (GBP) would be suitable; this could potentially be increased if I had confidence that the system would pave the way for a cost-effective upgrade of our live systems in the future as well.

    I am new to SANs, and my only real experience is playing with OpenFiler on some old desktops. I think iSCSI should be suitable, but I've not done any research into how SQL Server may perform. I've had a browse through HP's sites and see plenty of information about EVA, MSA, LeftHand, etc. However, from looking at all that, I don't see which options would be best and, more importantly, I don't know exactly what I would need to buy.

    Any help, links or opinions would be much appreciated. Thanks

    Read the article

  • How do I specify the emergency location in CDP?

    - by chrish
    In the LLDP-MED and Cisco Discovery Protocol whitepaper, Cisco compares LLDP-MED and CDP. The part I am interested in is emergency location configuration. In LLDP-MED, I can specify the Emergency Location Identification Number (ELIN), and that number will be used by some IP phones (e.g. Aastra) when placing emergency calls. The whitepaper states:

        Location Identification Discovery
        This capability is important because it normally provides location information from the switch to the phone. (If the phone is configured with location information or can determine its location, then it may send this information to the switch. However, the real value is getting this information from the switch to the phone for phones that cannot determine their own location.) Location identification discovery allows the phone to be aware of its location - information that can be used for location-based applications on the phone. More importantly, this capability can be used to provide location information when making emergency calls. Both Cisco Discovery Protocol and LLDP-MED support the transportation of location information. However, LLDP-MED has more supported data formats than Cisco Discovery Protocol.

    I have found the documentation on how to set the location and associate it with interfaces for LLDP-MED. How is this done for CDP? Is ELIN supported for CDP?

    Read the article

  • IIS permission configuration issue

    - by Dan
    Sorry, the title of this question is a little ambiguous, but I don't really have any idea where the issue lies - I'm seeking some clarification of the server error logs.

    Basically, I had a dedicated server running Windows 2003 and Plesk (v8 I think). Last week the server hardware failed and the entire thing had to be rebuilt from scratch. New hardware was put in, new operating system (Win2008), new Plesk installation (v9.5), new software (MSSQL etc), then all data was ported over manually from the old C and D drives to restore all 30 client sites. It was hell!

    All has been okay for a couple of days, but about an hour ago - POP! Suddenly all sites went down giving a 500 error. Restarting all services eventually brought everything back online, but I'm now living in total fear. It can - and probably will - happen again.

    The guys on support gave me the following errors from the server log:

        The Template Persistent Cache initialization failed for Application Pool 'ASP.NET v4.0 Classic' because of the following error: Could not create a Disk Cache Sub-directory for the Application Pool. The data may have additional error codes.

        The worker process for application pool 'domain1.com(domain)(2.0)(pool)' encountered an error 'Cannot read configuration file ' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\domain1.com(domain)(2.0)(pool).config', line number '0'. The data field contains the error code.

        The worker process for application pool 'PleskControlPanel' encountered an error 'Cannot read configuration file ' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\PleskControlPanel.config', line number '0'. The data field contains the error code.

    The support guys are so ambiguous about this, and it scares me horribly. Can anyone positively identify the cause of this error, which led to all client websites going offline? What can be done to prevent it from happening again? Any pointers would be very much appreciated! Thanks folks...

    Read the article

  • Xmodmap fails to remap modifier keys

    - by ZyX
    When I try to move keys, so that I have CapsLock on Escape, Control on CapsLock and Escape on left Control, I get the following error:

        % xmodmap ~/.Xmodmap
        X Error of failed request:  BadValue (integer parameter out of range for operation)
          Major opcode of failed request:  118 (X_SetModifierMapping)
          Value in failed request:  0x17
          Serial number of failed request:  15
          Current serial number in output stream:  15

    That is the code that fails:

        remove Lock = Caps_Lock
        ! ESC
        keycode 9 = Caps_Lock
        add Lock = Caps_Lock
        remove Control = Control_L
        ! CapsLock
        keycode 66 = Control_L
        add control = Control_L
        ! Control_R
        keycode 37 = Escape
        ! 2*Meta_L
        keycode 148 = Meta_L
        add mod1 = Meta_L

    If I comment out all lines that start with either add or remove, it runs without any errors, but does not do what I want.

    Program versions (Gentoo x86 (stable)):

        xorg-server-1.7.6
        xmodmap-1.0.4
        xf86-input-evdev-2.3.2

    Xorg.conf:

        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 1.0 (buildmeister@builder63) Fri Aug 14 17:54:58 PDT 2009

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0"
            InputDevice    "Evdev Keyboard" "CoreKeyboard"
            InputDevice    "Evdev Mouse" "CorePointer"
        EndSection

        Section "Module"
            Disable "dri"
            Disable "dri2"
            Disable "record"
        EndSection

        Section "InputDevice"
            Identifier "Evdev Keyboard"
            Driver     "evdev"
            Option     "Device" "/dev/input/event2"
            Option     "CoreKeyboard"
            Option     "AutoRepeat" "500 25"
            Option     "XkbRules" "xorg"
            Option     "xkb_rules" "xorg"
            Option     "XkbModel" "yahoo"
            Option     "xkb_model" "yahoo"
            Option     "XkbLayout" "dvp2" # ,ru2
            Option     "xkb_layout" "dvp2" # ,ru2
            # Option   "XkbVariant" "" # ,winkeys
            Option     "XkbOption" "grp_led:scroll,grp:rctrl_toggle,compose:rwin,grp:lwin_switch" # grp:lwin_switch
        EndSection

        Section "InputDevice"
            Identifier "Evdev Mouse"
            Driver     "evdev"
            Option     "CorePointer"
            Option     "Device" "/dev/input/event3"
            Option     "Name" "Genius Ergo Mouse"
            Option     "HWHEELRelativeAxisButtons" "7 6"
            Option     "WHEELRelativeAxizButtons" "4 5"
            Option     "SendCoreEvents" "true"
            Option     "Buttons" "11"
        EndSection

        Section "Files"
            FontPath "/usr/share/fonts/misc"
            FontPath "/usr/share/fonts/Type1"
            FontPath "/usr/share/fonts/100dpi"
            FontPath "/usr/share/fonts/75dpi"
            FontPath "/usr/share/fonts/terminus"
            # FontPath "/usr/share/fonts/intlfonts"
            FontPath "/usr/share/fonts/ttf-bitstream-vera"
            # FontPath "/usr/share/fonts/ttf"
            FontPath "/usr/share/fonts/corefonts"
            FontPath "/usr/share/fonts/paratype"
        EndSection

        Section "Monitor"
            Identifier  "Monitor0"
            VendorName  "Unknown"
            ModelName   "Unknown"
            HorizSync   28.0 - 33.0
            VertRefresh 43.0 - 72.0
            Option      "DPMS"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver     "nvidia"
            VendorName "NVIDIA Corporation"
        EndSection

        Section "Screen"
            Identifier   "Screen0"
            Device       "Device0"
            Monitor      "Monitor0"
            DefaultDepth 24
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

        Section "Extensions"
            Option "Composite" "Disable"
        EndSection

        Section "ServerFlags"
            # Option "XkbDisable" "false"
            # Option "AutoAddDevices" "false"
            Option "DontVTSwitch" "false"
            Option "DontZap" "false"
            # Option "DontZoom" "true"
        EndSection

    Everything worked before the update.

    Read the article

  • Using Round Robin DNS on simple VPN setup

    - by dannymcc
    We have two internet connections which are load balanced to share the load between the two. We set this up after one of the internet providers proved to be less than reliable, though great speed- and latency-wise when it is working. We'd rather utilise both connections as much as possible than have one sit idle until the other drops out.

    We have a number of remote workers who occasionally need to connect via VPN from their laptops or iPads; we also have a small number of permanent LAN-to-LAN tunnels running from smaller branches. Originally we only had one internet connection and used one of our static IP addresses for all VPN users. Now that we have two internet connections running all of the time, I am trying to make sure that the VPN is available to our team regardless of which connection drops.

    So my solution is to create two A records named vpn. under our domain, one for each of the two static IP addresses (one per peer).

    Is this a sensible way of achieving this? Should I expect higher latency due to packets being lost if one peer fails and some packets still get routed to it anyway?

    A brief mockup of the setup I have is attached as a diagram.
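
    For what it's worth, this is the mechanism the two A records rely on, sketched in Python ("vpn.example.com" is a placeholder, and whether a particular VPN client actually falls back to the second address on timeout is client-specific):

        import socket

        # One name resolves to every A record; the client is expected to try each
        # address in turn until one responds. Illustration of the mechanism only.
        addresses = sorted({info[4][0] for info in socket.getaddrinfo("vpn.example.com", None)})
        print(addresses)  # e.g. ['198.51.100.7', '203.0.113.21'] - one per internet connection

        for address in addresses:
            print(f"client would attempt the VPN handshake against {address} "
                  "and move on to the next address on timeout")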

    Read the article

  • How do I calculate clock speed in multi-core processors?

    - by NReilingh
    Is it correct to say, for example, that a processor with four cores each running at 3 GHz is in fact a processor running at 12 GHz?

    I once got into a "Mac vs. PC" argument (which by the way is NOT the focus of this topic... that was back in middle school) with an acquaintance who insisted that Macs were only being advertised as 1 GHz machines because they were dual-processor G4s each running at 500 MHz. At the time I knew this to be hogwash for reasons I think are apparent to most people, but I just saw a comment on this website to the effect of "6 cores x 0.2 GHz = 1.2 GHz" and that got me thinking again about whether there's a real answer to this.

    So, this is a more-or-less philosophical/deep technical question about the semantics of clock speed calculation. I see two possibilities:

    1. Each core is in fact doing x calculations per second, thus the total number of calculations is x(cores).
    2. Clock speed is rather a count of the number of cycles the processor goes through in the space of a second, so as long as all cores are running at the same speed, the speed of each clock cycle stays the same no matter how many cores exist. In other words, Hz = (core1Hz + core2Hz + ...)/cores.
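
    One way to make the distinction between the two possibilities concrete is to separate aggregate throughput from per-core clock rate; a trivial sketch with the numbers from the question:

        # Two ways of combining per-core clock rates, using the numbers above.
        cores = 4
        per_core_hz = 3e9  # each core runs at 3 GHz

        # Possibility 1: aggregate work capacity scales with core count
        # (roughly 12 billion cycles of work per second across the package).
        aggregate_cycles_per_second = cores * per_core_hz

        # Possibility 2: clock speed is the rate of each core's clock, which does not
        # change however many cores share the package - averaging equal cores still gives 3 GHz.
        average_clock_hz = (per_core_hz * cores) / cores

        print(f"aggregate throughput: {aggregate_cycles_per_second / 1e9:.0f} billion cycles/s")
        print(f"clock speed per core: {average_clock_hz / 1e9:.0f} GHz")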

    Read the article

  • How can I control disk numbering (enumeration) in Windows 7 Disk Management?

    - by tim11g
    A desktop system had two drives (assigned C and D, which were enumerated in Disk Management as Disk 0 and Disk 1). A new SSD was added as the boot drive, after copying the C drive to the SSD. The SSD was connected to the SATA 0 (master) port on the motherboard. The previous C drive was moved to SATA 2 and is reformatted as a non-booting NTFS partition. The D drive remained on SATA 1.

    The system boots and everything seems fine. I was able to manually adjust the drive letters. However, the list in Disk Management is re-ordered: Disk 0 is the previous Disk 1 (the D drive) on SATA 1, Disk 1 is the new boot drive (now C) on SATA 0, and Disk 2 is the former C drive (now assigned E) on SATA 2.

    Does the Disk 0, 1, 2 designation mean anything? I would prefer to have them display in Disk Management as drives C, D, and E from top to bottom. Is the disk enumeration based on the SATA port or something else? (If it were based on the SATA port, they should be ordered C, D, E.) Is there any way to re-order the disk number assignments? What actually determines the disk number enumeration?

    Read the article

  • Excel Help: Data Input Help

    - by B-Ballerl
    Every day I download data from a site that will have rows each filled with individual data for clients. I'm able to input the data into Excel as a whole, but after that I'm having trouble figuring out how to put it into a chart.

    For example, web visit time: say Client 1 stayed for 5 min, increasing his total time on the site to 20 min, and Client 2 stayed for 0 min, keeping his total at 10 min; both were registered on New Year's Eve, R1's last login was today and R2's was yesterday (R for some reason represents Client, no idea why...). Client 3 hasn't been on since he registered, keeping his total at 4 min.

    So my data would look something like this for today (20110104):

        R1,20101231,20110104,20
        R2,20101231,20110103,10
        R3,20101231,20101231,4

    And this for the day before (20110103):

        R1,20101231,20110102,15
        R2,20101231,20110103,10
        R3,20101231,20101231,4

    I get about 200+ client rows each day, where even the names in the client list are changing. Is it possible to import the data each day and fill in an Excel sheet where the client number sits on the left-hand side of a table, and the amount of time (a whole number, e.g. 4) it spends on the site each day extends to the right under that specific date? I've managed to create a manual sheet but have been unsuccessful at getting Excel to do any of it for me (see the two attached pictures).
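
    For what it's worth, here is a rough sketch of building that table outside Excel with Python/pandas, assuming each daily download is saved as a CSV in the quoted format (the file names and column names are made up for the illustration):

        import pandas as pd

        # Hypothetical daily files in the format quoted above:
        # client, registered, last_login, total_minutes
        daily_files = {
            "20110103": "visits_20110103.csv",
            "20110104": "visits_20110104.csv",
        }

        frames = []
        for day, path in daily_files.items():
            frame = pd.read_csv(
                path,
                header=None,
                names=["client", "registered", "last_login", "total_minutes"],
            )
            frame["day"] = day
            frames.append(frame)

        # One row per client, one column per day, total minutes in the cells -
        # the same shape as the manual sheet described above.
        table = (
            pd.concat(frames)
            .pivot_table(index="client", columns="day", values="total_minutes")
        )
        print(table)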

    Read the article

  • Hudson deploy specific git revision

    - by brad
    I'm using Hudson to auto-deploy my Rails app to Heroku. In my main build job I pull from a Git repo (hosted using gitosis on the same machine), master branch, with the following:

        URL of repository: /home/git/repositories/my_app.git
        Name of repository: origin
        Refspec: +refs/heads/master:refs/remotes/origin/master
        Branches to build: master

    Then, assuming all tests pass, I want to kick off a new build that is the deploy to Heroku. I can't, however, figure out how to get that deploy build to check out the particular revision that this build was using. I understand there's a parameterized trigger plugin that would allow me to pass this revision number, but I don't know how I can tell Hudson to check out this particular revision in the deploy build.

    I'm pretty sure this just has to do with my limited knowledge of git, but where in Hudson's git config is there an option to check out a particular revision? Otherwise, many commits could happen while a build is running, and when it kicks off a deploy build, that deploy build would just check out the HEAD of the branch, which may not be the same as the code that was pushed and triggered the first build.

    I don't fully understand why I have a refspec in Hudson but also specify a branch to build; I thought these were the same thing. Can the refspec somehow specify the revision number? How would this be referenced if it was passed through with the parameterized trigger plugin? (I've never used that plugin, but someone else recommended it as a way to pass vars to a new build; if there's another way I'm all ears.)

    Read the article

  • How many sites can IIS 6 handle?

    - by Sarah Nasir
    Is there a limit on creating sites in IIS? I have searched, and some forums discuss this and say there is no limit. Someone mentioned that he has created up to 100,000 sites in IIS 6, but I don't know his server specs. Personally I feel that, whatever the limit of IIS, the resources will run out well before the limit is reached. How do big sites like Blogger and WordPress handle a huge number of sites on their servers?

    Questions:

    1) Is there an upper limit for IIS 6.0? If yes, what is it?
    2) What would be a good number of requests for IIS to serve on a decent server? (I am not talking about dynamic requests on the server or logs.)
    3) Is there a way I can do a test run on my cloud to test the capability of my server? What factors should I keep in view - DB requests, page size, disk reads/writes, etc.?

    Responses will be highly appreciated.

    Read the article

  • What is the IPv6 equivalent to IPv4 RFC1918 addresses?

    - by Kumba
    Having a hard time wrapping my head around IPv6 here. A lot of the lingo seems targeted at enterprise-level IPv6 deployments, discussing link-local, site-local, global unicast, scopes, etc. There is not a lot of solid information on really small networks, like home networks. I want to check my thinking and make sure I am getting the correct translations from IPv4-speak to IPv6-speak.

    The first question is: what's the equivalent of RFC 1918 for IPv6? Initial searches suggested there was no equivalent. Then I stumbled upon Unique Local Addresses (RFC 4193), which states that all ULAs should be assigned the prefix fc00, followed by a 40-bit random number in the routing prefix. This random number is to "prevent collisions when two IPv6 networks are interconnected" - again, another reference to an enterprise-level function.

    If I have a small local LAN at home, numbered using 192.168.4.0/24, what's my equivalent in IPv6's ULA scope? Assuming I will never, ever tie that IPv6 address into the real internet (a router will NAT & firewall it), can I ignore the RFC to an extent and go with fc00::4:0/120? It also seems that any address in fc00::/7 is meant to be globally routable. Does this mean I'll need extra protections so my router does not automatically start advertising these private IPv6 addresses to the world?

    Second question: what's this link-local thing? Reading suggests a default-assigned address in the fe80::/10 range, with the last 64 bits of the address derived from the interface's MAC address. It seems to be required, too, but I'm annoyed by the constant discussion of it in relation to enterprise networks.

    Third question: what is the scope ID for? It seems to be yet another term tossed around in relation to enterprise networks, especially when interconnecting them, but there's almost no explanation at the smaller home-network level. Can I see a scope ID AND CIDR notation used together, i.e. fc00::4:0/120%6, or are scope IDs only supposed to be applied to a single /128 IPv6 address?
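
    For reference, generating a compliant ULA prefix is cheap enough that skipping the 40-bit random global ID buys very little, even at home. A minimal Python sketch (note that RFC 4193 sets the L bit for locally assigned prefixes, so in practice they come out of fd00::/8):

        import os
        import ipaddress

        # RFC 4193 layout: 7-bit prefix (fc00::/7) + L bit (1 for locally assigned)
        # + 40-bit random global ID, giving a /48 the site can carve /64s out of.
        global_id = os.urandom(5)  # 40 random bits
        prefix_bytes = bytes([0xfd]) + global_id + bytes(10)  # fd + global ID + zeroed remainder
        ula_48 = ipaddress.IPv6Network((prefix_bytes, 48))

        print(f"site prefix:  {ula_48}")
        # A /64 for one home LAN, e.g. the rough analogue of 192.168.4.0/24 (subnet ID 4):
        subnet_4 = list(ula_48.subnets(new_prefix=64))[4]
        print(f"LAN subnet 4: {subnet_4}")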

    Read the article
