Search Results

Search found 10595 results on 424 pages for 'job definition'.

  • Puppet: array in parameterized classes VS using resources

    - by Luke404
    I have some use cases where I want to define multiple similar resources that should end up in a single file (via a template). As an example, I'm trying to write a Puppet module that will let me manage the mapping between MAC addresses and network interface names (writing udev's persistent-net-rules file from Puppet), but there are many other similar use cases. I searched around and found that it could be done with the new parameterised classes syntax; implemented that way, it would be used like this:

        node "myserver.example.com" {
          class { "network::iftab":
            interfaces => {
              "eth0" => { "mac" => "ab:cd:ef:98:76:54" },
              "eth1" => { "mac" => "98:76:de:ad:be:ef" },
            }
          }
        }

    Not too bad, I agree, but it would rapidly explode when you manage more complex stuff (think network configurations like in this module, or any other multiple-complex-resources-in-a-single-config-file situation). In a similar question on SF someone suggested using Pienaar's puppet-concat module, but I doubt it could get any better than parameterised classes. What would be really cool and clean in the configuration definition would be something like the built-in host type: its usage is simple, pretty and clean, and it naturally maps to multiple resources that end up configured in a single place. Transposed to my example it would look like:

        node "myserver.example.com" {
          interface { "eth0":
            mac => "ab:cd:ef:98:76:54",
            foo => "bar",
            asd => "lol",
          }
          interface { "eth1":
            mac => "98:76:de:ad:be:ef",
            foo => "rab",
            asd => "olo",
          }
        }

    That looks much better to my eyes, even with 3x options on each resource. Should I really be passing arrays to parameterised classes, or is there a better way to do this kind of thing? Is there some accepted consensus in the Puppet users/developers community? By the way, I'm referring to the latest stable release of the 2.7 branch and I am not interested in compatibility with older versions.
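
    A minimal sketch of the defined-type approach, assuming the puppet-concat module mentioned above is available; the target file path, the fragment content and the defined-type name are illustrative, not taken from any existing module:

        define network::interface ($mac) {
          concat::fragment { "iftab_${name}":
            target  => "/etc/iftab",
            content => "${name} mac ${mac}\n",
          }
        }

        node "myserver.example.com" {
          concat { "/etc/iftab": }

          network::interface { "eth0": mac => "ab:cd:ef:98:76:54" }
          network::interface { "eth1": mac => "98:76:de:ad:be:ef" }
        }

    Each declaration of the defined type contributes one fragment and concat assembles them into a single file, which gives roughly the host-like syntax asked for without passing one large hash to a parameterised class.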

    Read the article

  • Monit unable to start sidekiq on Opsworks server

    - by webdevtom
    I have used AWS OpsWorks to create some servers. I have Sidekiq running as part of my Rails application, and when I deploy, Sidekiq restarts nicely. I am configuring Monit to watch the pid and start and stop Sidekiq if there are any issues. However, when Monit tries to start Sidekiq I see that the wrong Ruby appears to be used:

        Oct 17 13:52:43 daitengu sidekiq: /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/definition.rb:361:in `validate_ruby!': Your Ruby version is 1.8.7, but your Gemfile specified 1.9.3 (Bundler::RubyVersionMismatch)
        Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler.rb:116:in `setup'
        Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/setup.rb:17

    When I run the command from the CLI, Sidekiq launches correctly:

        $> cd /srv/www/myapp/current && RAILS_ENV=production nohup /usr/local/bin/bundle exec sidekiq -C config/sidekiq.yml >> /srv/www/myapp/shared/log/sidekiq.log 2>&1 &
        $> ps -aef | grep sidekiq
        root      1236  1235  8 20:54 pts/0    00:00:50 sidekiq 2.11.0 myapp [0 of 25 busy]

    My sidekiq.monitrc file:

        check process unicorn with pidfile /srv/www/myapp/shared/pids/unicorn.pid
          start program = "/bin/bash -c 'cd /srv/www/myapp/current && /usr/local/bin/bundle exec unicorn_rails --env production --daemonize -c /srv/www/myapp/shared/config/unicorn.conf'"
          stop program = "/bin/bash -c 'kill -QUIT `cat /srv/www/myapp/shared/pids/unicorn.pid`'"
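
    A sketch of a Sidekiq stanza, reusing the paths from the Unicorn example above; the point is that Monit runs programs with a stripped-down environment, so the start command should pin PATH (or spell out the path to the intended Ruby) rather than rely on a login shell profile. The pidfile location and the PATH value here are assumptions to adapt:

        check process sidekiq with pidfile /srv/www/myapp/shared/pids/sidekiq.pid
          start program = "/bin/bash -c 'cd /srv/www/myapp/current && PATH=/usr/local/bin:$PATH RAILS_ENV=production /usr/local/bin/bundle exec sidekiq -C config/sidekiq.yml -P /srv/www/myapp/shared/pids/sidekiq.pid -d -L /srv/www/myapp/shared/log/sidekiq.log'"
          stop program = "/bin/bash -c 'kill -TERM `cat /srv/www/myapp/shared/pids/sidekiq.pid`'"

    After editing, `monit reload` followed by `monit start sidekiq` exercises the stanza and shows whether the right Ruby is now picked up.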

    Read the article

  • I keep losing wireless connection

    - by posfan12
    I have a WRT54GL v1.1 wireless router and a WUSB54G v4 wireless adapter, both made by Linksys. The router is in the living room by the TV and my computer is in the bedroom. My ISP is Brighthouse.

        Operating System: Microsoft Windows 7 Home Premium 64-bit SP1
        CPU: Intel Core 2 Duo E6600 @ 2.40GHz, 36 °C, Conroe 65nm Technology
        RAM: 3.00GB Single-Channel DDR2 @ 333MHz (5-4-4-14)
        Motherboard: eMachines EMCP73VT-PM (CPU 1), 26 °C
        Graphics: ASUS VS247 (1920x1080@60Hz), 767MB GeForce GTX 460 (nVidia), 43 °C
        Hard Drives: 466GB Seagate ST350041 8AS SCSI Disk Device (SATA), 35 °C
        Optical Drives: HL-DT-ST DVDRAM GH41N SCSI CdRom Device
        Audio: High Definition Audio Device

    The problem is that my Internet connection will work fine for 15 minutes or so, then the data just stops flowing. Windows says I am still connected, and the systray icon still shows five bars, but Comodo Firewall stops showing up and down traffic, and another of my systray applications complains about a lack of connection. What I usually do is either disconnect from the network manually, or unplug and re-plug the USB adapter, at which point the connection works properly for another 15 minutes. I've tried unplugging my router for 30 seconds and letting it reboot. I've also tried looking for a newer driver for my adapter, but I seem to have the latest version, 3.1.3.0. This is a recent problem, starting about a week ago; for the previous several months things were working just fine. I haven't made any changes to my system that I am aware of. The only thing I did was open my case to blow the dust out of it, then put everything back together. How do I fix this issue?

    Read the article

  • Symantec Endpoint Protection Virus Definitions

    - by Gus Denton
    I have done some Googling but I cannot get a definitive answer, certainly not from the Symantec KB. I have a virtualised Win 2003 R2 32-bit server. It has been provisioned to me with the Symantec Endpoint Protection 11.0.62xxx CLIENT (not a definitions server). The directory C:\Program Files\Common Files\Symantec Shared\VirusDefs is 750MB. It doesn't contain .tmp directories, so it is NOT a corrupt definitions store. It does contain directories named with a date pattern, YYYYMMDD.xxx. Some of these folders are 12 months old and I would like to recover the space. The Symantec forums are full of this stuff, but a lot of the postings contain links back to documents that are not specific to the Endpoint Protection client. It appears that I should be able to delete the older folders and all will be OK with a service restart; however, there is a warning about having Live Update Administrator installed. Firstly, I have no idea if I have this installed - how do I check? And secondly, can I just ditch these old files and restart?

    Regards, Gus Denton, Learning and Teaching, Uni of New South Wales, Sydney, Australia

    For those trying to assist me, thank you. I have followed some instructions found on the Symantec site and assumed that the response from Nixphoe would resolve my issue. It appears that, as I am on a provisioned VM from a central IT unit, I cannot run the Symantec commands from the Run prompt, as my admin creds won't get me in (smc -stop). Basically I need to claw back some disk space from the C: drive, which is being filled up with WSUS patches and Symantec files. I have managed to delete one Symantec cache through the LiveUpdate control panel and recovered 470Mb. I suppose my last question for those more experienced than myself is: can I simply remove, say, the two oldest virus definition folders without completely foobaring Endpoint Protection and the server?

    Regards, Gus

    Read the article

  • mrepo and grouplist/groupinstall: mrepo not working as expected with groups

    - by user52874
    All, I'm trying to set up mrepo so we can have internal repositories. After quite the slog, things seem to be working as expected EXCEPT for groups. From man createrepo:

        EXAMPLES
        Here is an example of a repository with a groups file. Note that the groups file should be in the same directory as the rpm packages (i.e. /path/to/rpms/comps.xml).
        createrepo -g comps.xml /path/to/rpms

    So here's what I'm doing:

        wget -c http://ftp.scientificlinux.org/linux/scientific/6/x86_64/os/repodata/comps-sl6-x86_64.xml
        cp comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/comps-sl6-x86_64.xml
        createrepo -g comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/

    Lots of output, no apparent errors or warnings. BUT, from a client:

        yum grouplist
        Loaded plugins: refresh-packagekit
        Setting up Group Process
        Error: No group data available for configured repositories

    Here's /etc/mrepo.conf:

        ### Configuration file for mrepo

        ### The [main] section allows to override mrepo's default settings
        ### The mrepo-example.conf gives an overview of all the possible settings
        [main]
        srcdir = /var/mrepo
        wwwdir = /var/www/mrepo
        confdir = /etc/mrepo.conf.d
        arch = x86_64
        mailto = root@localhost
        smtp-server = localhost
        pxelinux = /usr/lib/syslinux/pxelinux.0
        tftpdir = /tftpboot
        #rhnlogin = username:password

        ### Any other section is considered a definition for a distribution
        ### You can put distribution sections in /etc/mrepo.conf.d
        ### Examples can be found in the documentation.

    Here's /etc/mrepo.conf.d/sl6.mrepo:

        ### Scientific Linux 6
        [SL6]
        name = Scientific Linux 6
        release = 6
        arch = x86_64
        metadata = repomd repoview
        os = rsync://rsync.scientificlinux.org/scientific/$release/$arch/os/
        updates = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/
        security = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/security/
        fastbugs = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/fastbugs/
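
    A hedged troubleshooting sketch; the paths come from the question, but whether the clients' repo baseurl points at os/ or Packages/ (and therefore which repodata/ directory needs the group file) is an assumption worth verifying before running anything:

        # on the server: regenerate metadata with the group file against the directory
        # whose repodata/ the clients actually use (the repo root, not necessarily Packages/)
        createrepo -g /var/mrepo/SL6-x86_64/os/Packages/comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/

        # on a client: discard cached metadata and ask again
        yum clean all
        yum grouplist

    If mrepo itself rewrites the metadata on its next run, the -g step has to be repeated or folded into whatever regenerates the repo.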

    Read the article

  • Mac OS X Client With Static DHCP Assignment Requests Wrong IP via Option 50

    - by Starchy
    I have a number of Mac (and a few Linux) laptops getting DHCP from a Force10 layer 3 switch, the only DHCP server on the subnet. There's a global dynamic pool, and for each full-time employee's laptop I have a single-IP static pool set by MAC address. One and only one of the clients, running OS X 10.7.5, consistently fails to get a static assignment. The MAC address in the static pool definition has been carefully re-checked. Running tcpdump on a mirrored port when the laptop connects, I see that it is specifically requesting 10.100.0.252 (a dynamic address):

        11:32:10.108280 IP (tos 0x0, ttl 255, id 28293, offset 0, flags [none], proto UDP (17), length 328)
            0.0.0.0.bootpc > broadcasthost.bootps: [udp sum ok] BOOTP/DHCP, Request from 3c:07:54:xx:xx:xx (oui Unknown), length 300, xid 0x1399da89, Flags [none] (0x0000)
              Client-Ethernet-Address 3c:07:54:xx:xx:xx (oui Unknown)
              Vendor-rfc1048 Extensions
                Magic Cookie 0x63825363
                DHCP-Message Option 53, length 1: Request
                Parameter-Request Option 55, length 9:
                  Subnet-Mask, Default-Gateway, Domain-Name-Server, Domain-Name,
                  Option 119, LDAP, Option 252, Netbios-Name-Server, Netbios-Node
                MSZ Option 57, length 2: 1500
                Client-ID Option 61, length 7: ether 3c:07:54:xx:xx:xx
                Requested-IP Option 50, length 4: 10.100.0.252
                Lease-Time Option 51, length 4: 7776000
                Hostname Option 12, length 10: "host-name"
                END Option 255, length 0
                PAD Option 0, length 0, occurs 8

    I haven't been able to find any extra system prefs or unusual software on the laptop. Disabling the interface and rebooting, or temporarily setting the IP manually, both fail to make any difference. Any suggestions appreciated.
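
    One thing worth trying, sketched below as an assumption rather than a confirmed fix: OS X caches its last lease and re-requests that address via Option 50, so clearing the stored lease before re-running DHCP may stop the client asking for the old dynamic address. The interface name en0 is illustrative; adjust to the interface actually in use:

        # remove the cached DHCP lease(s) for the interface, then request a fresh one
        sudo rm /var/db/dhcpclient/leases/en0-*
        sudo ipconfig set en0 DHCP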

    Read the article

  • How to write re-usable puppet definitions?

    - by Oliver Probst
    I'd like to write a Puppet manifest to install and configure an application on target servers. Parts of this manifest should be re-usable, so I used define for the re-usable functionality. Doing so, I always have the problem that there are parts of the definition which are not re-usable. A simple example is a bunch of configuration files to be created: these files must be placed in the same directory, and that directory must be created only once.

    Example, nodes.pp:

        node 'myNode.in.a.domain' {
          mymodule::addconfig { 'configfile1.xml':
            param => 'somevalue',
          }
          mymodule::addconfig { 'configfile2.xml':
            param => 'someothervalue',
          }
        }

    mymodule.pp:

        define mymodule::addconfig ($param) {
          $config_dir = "/the/directory/"

          # ensure that the directory exists:
          file { $config_dir:
            ensure => directory,
          }

          # create the configuration file:
          file { $name:
            path    => "${config_dir}/${name}",
            content => template('a_template.erb'),
            require => File[$config_dir],
          }
        }

    This example will fail, because now the resource file { $config_dir: ... } is defined twice. As far as I understood, it is required to extract these parts into a class. Then it looks like this:

        node 'myNode.in.a.domain' {
          class { 'mymodule::createConfigurationDirectory': }

          mymodule::addconfig { 'configfile1.xml':
            param   => 'somevalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
          mymodule::addconfig { 'configfile2.xml':
            param   => 'someothervalue',
            require => Class['mymodule::createConfigurationDirectory'],
          }
        }

    But this makes my interface hard to use. Every user of my module has to know that there is a class which is additionally required. For this simple use case the additional class might be acceptable, but with growing module complexity (lots of definitions) I'm a bit afraid of confusing the module's users. So I'd like to know whether there is a better way to handle these dependencies. Ideally, classes like createConfigurationDirectory would be hidden from the user of the module's API. Or are there some other best practices/patterns for handling such dependencies?
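
    One common pattern, sketched here under the assumption that the directory management moves into its own class: a class can be included any number of times without causing a duplicate resource, so the define can pull it in itself and the node manifest never needs to mention it. The class name and paths below are illustrative:

        class mymodule::config_dir {
          file { '/the/directory/':
            ensure => directory,
          }
        }

        define mymodule::addconfig ($param) {
          include mymodule::config_dir

          file { "/the/directory/${name}":
            content => template('a_template.erb'),
            require => Class['mymodule::config_dir'],
          }
        }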

    Read the article

  • Change the number of consecutive frequent ssh logins allowed before temporarily blocking the user

    - by Kenneth
    My server currently refuses to let a user log in for a certain amount of time (maybe ~20 min) if the user makes 3 ssh logins in quick succession. Can I change this behaviour (say, relax the definition of "frequent" from 'within 5 sec' to 'within 10 sec', or increase the number of consecutive logins allowed from 3 to 5)? Thanks.

    Added: Ah, now I think the problem is not with ssh itself. I just tried on another newly installed server, and consecutive successful logins do not block the user there. I have no sudo permission on the server I mentioned above. I now suspect this behaviour is caused by the firewall on the system. Thanks for everyone's comments.

    Added 2: After some searching, I think the server is using /sbin/iptables to do it, as I can see the iptables program is there even though I don't have permission to list the rules. Thanks everyone, special thanks to jaume and Mark!
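
    For reference, a sketch of the kind of rate-limiting rules this behaviour usually comes from, using the iptables recent match; the numbers shown are common defaults and are assumptions here, but --seconds and --hitcount are exactly the knobs an administrator would change to relax the limit:

        # track new ssh connections, and drop a source that opens 4 or more within 60 seconds
        iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH
        iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 --name SSH -j DROP

    Raising --seconds or --hitcount (or removing the pair of rules) changes how aggressive the blocking is; doing so requires root on the server.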

    Read the article

  • Audio card with built-in ground isolator?

    - by Dave Jarvis
    What audio cards would you recommend that eliminate hum and hard-drive & mouse-movement signal interference?

    Hardware components:

        Motherboard: Asus P5Q SE
        Audio: Realtek ALC 1200, 8-Channel High-Definition Audio CODEC (on board)
        Hard drive: WD Caviar 320 GB
        Mouse: Logitech Marbleman USB
        Mixer: Mackie d.4 Pro
        Amplifier: Sonance Sonamp 260

    All components are plugged into the same Monster Power HDP 910 power bar (which does not help eliminate the noise). I have no other components plugged in. The computer uses a Monster iCable 1000 to go from mini (on-board audio) to RCA (mixer). I have moved the cable as far from other cables as possible. A ground loop isolator between the mixer and on-board audio eliminates all noise. I would rather not use a ground loop isolator; an internal audio card that is Linux-compatible (Kubuntu) would be ideal. Suggestions?

    Read the article

  • Block protect (Keep last line of paragraph with next paragraph)

    - by Ed Cottrell
    Is there a way to force Microsoft Word 2010 to keep the last line of a paragraph with the next paragraph? An example of when this is relevant is when starting a block quote; it doesn't look good to have the block quote start at the top of a new page, particularly when it's introduced by a partial sentence, like this: "Lorem ipsum" is sample text widely used in the publishing industry, as the text has spacing roughly similar to that of English and therefore looks "normal" but unintelligible to an English reader's eye, allowing the reader to focus on design elements. It begins, Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nam rhoncus laoreet risus, quis congue leo viverra congue. Suspendisse magna massa, viverra imperdiet est eu, ultrices volutpat lectus. Sed pulvinar est id risus lobortis venenatis. There shouldn't be a page break after "begins," because it looks like the sentence ends abruptly. "Keep lines together" won't work, because by definition we're talking about two paragraphs. "Keep with next" won't work if the first paragraph is larger than a couple of lines, because then you get an awkwardly large space at the bottom of a page. Manual line breaks obviously work, but only when the document is final, which is often less certain than it seems. I know WordPerfect has a feature called "block protect" that does this, but I have not found even an acceptable substitute in Word. I have played with style separators and hidden paragraph breaks, but to no avail. I would love a special character, kind of like the nonbreaking space or zero width optional space, that tells Word to move to the next page if the next paragraph would otherwise start the page. A macro would also be great, but I haven't been able to find a starting point (like how to detect where non-manual page breaks fall). Edit: It looks like "Keep with next" works this way in Word 2013, but I specifically need a fix that works in Word 2010.

    Read the article

  • ASUS EAH5450 Graphics Card (ATI Radeon HD5450 - 1 GB DDR3) on Windows 2003? Anybody got it to work?

    - by JJarava
    Hi all! I've just bought an ASUS EAH5450 graphics card (ATI Radeon HD5450, 1 GB DDR3) for my main system, but I haven't been able to make it work under Windows 2003 (my OS on that system). When I plugged in the card, I got a couple of "installing drivers" prompts for things such as "ATI High Definition Audio Device" that sorted themselves out over the Internet, and then a "Standard VGA Graphics Adapter". The CD that came with the card installs something called "ATI Catalyst Install Manager" and .NET 2.0, but no drivers. I've downloaded the latest (WinXP 32-bit) drivers from ATI, and the experience is the same: I don't get any drivers installed.

    My motherboard is an ASUS A8N-SLI with the nVidia nForce 4 chipset (for an Athlon 64 X2, somewhat old), but my previous card was an ATI Radeon X700, so it has worked with ATI cards before. On POST during boot I see a "Display Card" device (Vendor ID 1002-68F9-0300) and a "Multimedia Device" (1002-AA68-0403), and when viewing the properties of the "Standard VGA" adapter, they match the device ID. Any hints? I'd really hate having to get rid of the card, and I'm sure what I'm trying to do isn't that unusual...

    Read the article

  • Better approach to archiving large amounts of original video footage using optical media (DVD/Blu-ray)

    - by Rob
    This question is to share my experience as well as to ask for suggestions for better methods.

    Along with 2 friends, I completed the making of a short documentary film in 2006. A clip is at: http://www.youtube.com/mediamotioninvision The film was edited in Adobe Premiere Pro 1.5 on Windows XP. More details and a screenshot here: http://www.flickr.com/photos/smilingrobbie/1350235514/ (note this is not intended to be a plug; we've moved on from this initial learning-curve project ;) )

    The film is in 4:3 standard definition 720x576 PAL format. As well as retaining the final 30-minute film, I wanted to keep all the original files that were assembled to make it. The footage was 83.5Gb, so I archived it to over 20 4.7Gb DVD recordables in the original .avi format (i.e. data DVD-ROM format, NOT DVD-Video MPEG-2). Some .avi DV video files were larger than 4.7Gb, so I used 7-Zip to split them (here is a guide on how to do that: http://www.linglom.com/2008/10/12/how-to-split-a-large-file-using-7-zip/ ). To recombine them, a DOS shell command like this does the job:

        copy /b file.avi.* file.avi

    where .* is a wildcard covering all the split parts, e.g. 001, 002...00n, assuming they are all in the same directory path; file.avi is the recombined file, identical to the original.

    Later on, I bought an LG BE06 LU10 USB 2.0 Super-Multi Blu-ray burner and archived the footage to 2 (two) x 50Gb BD-R DL discs, again in the original format, written as files to BD-R in the BD-ROM UDF format readable by PC/Mac etc., NOT the Blu-ray video/film format.

    This seems to be a good solution for me, because:

        - the archive is in a robust, reasonably permanent, non-volatile medium, i.e. DVD recordable / Blu-ray (debates about the stability of optical media organic chemical dyes/substrates aside)
        - the format of the archive is accessible by open source tools or just plain Windows Explorer, and it's not in a proprietary format

    I just thought I'd ask folks for their experience of better methods, if such exist.
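
    For completeness, a sketch of an alternative to the GUI split described above: 7-Zip's command line can produce archive volumes directly, which are rejoined with 7z rather than copy /b. The volume size and file names are illustrative, and -mx0 (store, no compression) is assumed since DV footage barely compresses:

        rem split into 4480 MB store-only volumes
        7z a -mx0 -v4480m file.avi.7z file.avi

        rem later, extract the original back out of the volume set
        7z x file.avi.7z.001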

    Read the article

  • Apache2 Virtualhost practice config issue

    - by sisko
    I am practicing virtual host configuration. In my /var/www directory I have created 3 directories called test1, test2 and test3, each of which has a simple index.php script in it, i.e. test1/index.php etc. In /etc/apache2/sites-available/test1 I have the following configuration:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName test1
            DocumentRoot /var/www/test1

            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/test1/>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    All the other sites have a similar VirtualHost definition. I have enabled the sites (the symlinks appear in sites-enabled) and I have restarted Apache. However, when I visit localhost/test1, I get a 404 error. My error log shows the following message:

        [Wed Oct 23 06:22:52 2013] [error] [client 127.0.0.1] File does not exist: /var/www/test1/test1

    I don't know why I get the double test1/test1 in the error log. I'm trying to find the right VirtualHost setup which will allow all 3 test websites to be served from their URLs, i.e. test1/index.php, test2/index.php and test3/index.php. Can anyone help me out, please?
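
    A sketch of one likely explanation, offered as an assumption rather than a confirmed diagnosis: with ServerName test1 and DocumentRoot /var/www/test1, a request for http://localhost/test1 is answered by whichever vhost wins for the name "localhost" and then looks for a test1 subdirectory inside that vhost's document root, hence /var/www/test1/test1. Name-based vhosts are selected by hostname, so one way to test them locally is to map the names in /etc/hosts and browse by name instead of localhost/path:

        # map the test hostnames to the local machine (run as root)
        echo "127.0.0.1 test1 test2 test3" >> /etc/hosts

        # then each site is addressed by its own name
        curl http://test1/index.php
        curl http://test2/index.php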

    Read the article

  • Apache2 - Hosting two sites on the same domain with different ports

    - by user1026361
    I am hosting a staging site (test.mydomain.com) which currently works well on port 80 for two sites (test.mydomain.com and test.FRmydomain.com). I am working on a new backend and I would like to deploy a third site on this server for testing. My hope is that it will live at test.mydomain.com:4204.

    I've got some experience with Apache and quickly added the statements:

        Listen 4204
        NameVirtualHost *:4204

    and created a new config for my site. What I imagine are the relevant parts of my config:

        <VirtualHost *:4204>
            ServerAdmin [email protected]
            ServerName test.mydomain.com:4204

    However, the site is not publicly available, by name or IP. If I curl localhost:4204 from the server, I get the expected page content.

    At this point, I'm at a bit of a loss as to how to go forward. It seems like my config is correct but not available to be served. Am I better off defining a proxy definition so that, for instance, test.mydomain.com/4204 proxies to my localhost server, or is there a way to make the site available via the Internet?

    EDIT: After further Googling, I have added an iptables rule with the command:

        iptables -I INPUT -p tcp --dport 4204 -j ACCEPT

    I can see Apache listening on 4204 and the rule is definitely in place, but I still can't reach the site.
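
    A short diagnostic sketch, assuming a plain Linux host; nothing here is specific to the setup in the question, it just narrows down where the connection dies (Apache's bind address, the local firewall, or filtering upstream of the host):

        # is Apache listening on all interfaces, or only 127.0.0.1?
        netstat -tlnp | grep :4204

        # does the ACCEPT rule actually sit before any REJECT/DROP rule?
        iptables -L INPUT -n --line-numbers | head -20

        # from a different machine: does the TCP connection open at all?
        curl -v http://test.mydomain.com:4204/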

    Read the article

  • How to get a higher resolution on Ubuntu 11.04 using an intel chipset

    - by Saif Bechan
    I have a somewhat slow PC here, so I decided to put Ubuntu 11.04 on it. It used to run Windows Vista at a resolution of 1280x1024, so both my hardware and monitor support it. Now I'm on Ubuntu, but I can only run 1024x768, and the screen is not that bright. It's like when you don't have the right drivers on a Windows machine. I'm new to Linux, so I do not know what to do. I have an onboard Intel chipset, i965. Maybe this is some useful information; I read something about it on a forum:

        lspci
        00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller (rev 02)
        00:02.0 VGA compatible controller: Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 02)
        00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 01)
        00:1c.0 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 1 (rev 01)
        00:1c.1 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 2 (rev 01)
        00:1d.0 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01)
        00:1d.1 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01)
        00:1d.2 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01)
        00:1d.3 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01)
        00:1d.7 USB Controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01)
        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1)
        00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01)
        00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 01)
        00:1f.2 IDE interface: Intel Corporation N10/ICH7 Family SATA IDE Controller (rev 01)
        00:1f.3 SMBus: Intel Corporation N10/ICH 7 Family SMBus Controller (rev 01)
        02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01)
        03:03.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev c0)

    Can someone please tell me how I can get the screen working properly?

        saif@sodium:~$ xrandr
        Screen 0: minimum 320 x 200, current 1024 x 768, maximum 4096 x 4096
        VGA1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
           1024x768       60.0*
           800x600        60.3     56.2
           848x480        60.0
           640x480        59.9
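
    As a stopgap sketch (assuming the monitor really does accept 1280x1024 over VGA1), a missing mode can be added by hand with cvt and xrandr; the modeline below is whatever cvt prints on your machine, not a value taken from this question:

        # generate a modeline for 1280x1024 at 60 Hz
        cvt 1280 1024 60

        # feed cvt's output (everything after the word "Modeline") to xrandr
        xrandr --newmode "1280x1024_60.00" 109.00 1280 1368 1496 1712 1024 1027 1034 1063 -hsync +vsync
        xrandr --addmode VGA1 "1280x1024_60.00"
        xrandr --output VGA1 --mode "1280x1024_60.00"

    This only lasts until the next restart of X; a permanent fix usually means sorting out the driver or the monitor's EDID detection.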

    Read the article

  • Lenovo S110 netbook screen resolution Ubuntu

    - by Neigyl R. Noval
    I am still stuck at 800x600 resolution. Here is the output of lspci:

        00:00.0 Host bridge: Intel Corporation Device 0bf2 (rev 03)
        00:02.0 VGA compatible controller: Intel Corporation Device 0be2 (rev 09)
        00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 02)
        00:1c.0 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 1 (rev 02)
        00:1c.1 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 2 (rev 02)
        00:1c.2 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 3 (rev 02)
        00:1d.0 USB Controller: Intel Corporation N10/ICH7 Family USB UHCI Controller #1 (rev 02)
        00:1d.1 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 02)
        00:1d.2 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 02)
        00:1d.3 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 02)
        00:1d.7 USB Controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 02)
        00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev e2)
        00:1f.0 ISA bridge: Intel Corporation NM10 Family LPC Controller (rev 02)
        00:1f.2 SATA controller: Intel Corporation N10/ICH7 Family SATA AHCI Controller (rev 02)
        00:1f.3 SMBus: Intel Corporation N10/ICH 7 Family SMBus Controller (rev 02)
        01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 05)
        02:00.0 Network controller: Realtek Semiconductor Co., Ltd. Device 8176 (rev 01)

    I also tried modifying /usr/lib/X11/xorg.conf.d/10-monitor.conf to fix this problem, but it still does not work:

        Section "Monitor"
            Identifier "Monitor0"
            VendorName "Monitor Vendor"
            ModelName "Monitor Model"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Monitor "Monitor0"
            Device "Card0"
            SubSection "Display"
                Viewport 0 0
                Depth 1
                Modes "1024x768"
            EndSubSection
            SubSection "Display"
                Viewport 0 0
                Depth 4
                Modes "1024x768"
            EndSubSection
            SubSection "Display"
                Viewport 0 0
                Depth 8
                Modes "1024x768"
            EndSubSection
            SubSection "Display"
                Viewport 0 0
                Depth 15
                Modes "1024x768"
            EndSubSection
            SubSection "Display"
                Viewport 0 0
                Depth 16
                Modes "1024x768"
            EndSubSection
            SubSection "Display"
                Viewport 0 0
                Depth 24
                Modes "1024x768"
            EndSubSection
        EndSection

        Section "Device"
            Identifier "Card0"
            Driver "vesa"
            VendorName "Intel Corporation Device"
        EndSection

    I'm using GNOME, and the System Preferences monitor screen-resolution setting sticks to 800x600. What should I do?

    Read the article

  • web application or web portal

    - by klo
    As the title says, I'm after the differences between the two. I have read the definitions and some articles, but I need information about some other aspects. Here is the thing: we want to build a web site that will contain a site, a database, uploads, and numerous background services that will have to collect information from the uploads and from some other sites, parse it, etc. I doubt there are portlets that fit our specific needs, so we will have to build them ourselves. So, the questions:

        1. Deployment (and the difference in cost, if possible): is deploying a portal much easier than a web app (Java or .NET)?
        2. Server load: does a portal consume much server power, and can you strip the portal of things you do not use?
        3. Implementation and development of portlets: can you do everything you could have done in Java or .NET?
        4. General thoughts on when to use portals and when to use a classic web app.

    Thanks all in advance.

    Read the article

  • Need help with custom init script

    - by churnd
    I'm trying to set up an init script for a process on Red Hat Linux:

        #!/bin/sh
        #
        # Startup script for Conquest
        #
        # chkconfig: 345 85 15
        #   (start or stop process definition within the boot process)
        # description: Conquest DICOM Server
        # processname: conquest
        # pidfile: /var/run/conquest.pid

        # Source function library. This creates the operating environment for the process to be started
        . /etc/rc.d/init.d/functions

        CONQ_DIR=/usr/local/conquest

        case "$1" in
        start)
            echo -n "Starting Conquest DICOM server: "
            cd $CONQ_DIR && daemon --user mruser ./dgate -v   # starts only one process of a given name
            echo
            touch /var/lock/subsys/conquest
            ;;
        stop)
            echo -n "Shutting down Conquest DICOM server: "
            killproc conquest
            echo
            rm -f /var/lock/subsys/conquest
            rm -f /var/run/conquest.pid   # only if the process generates this file
            ;;
        status)
            status conquest
            ;;
        restart)
            $0 stop
            $0 start
            ;;
        reload)
            echo -n "Reloading process-name: "
            killproc conquest -HUP
            echo
            ;;
        *)
            echo "Usage: $0 {start|stop|restart|reload|status}"
            exit 1
        esac

        exit 0

    However, the cd $CONQ_DIR is getting ignored, because the script errors out:

        # ./conquest start
        Starting Conquest DICOM server: -bash: ./dgate: No such file or directory
                                                                   [FAILED]

    For some reason, I have to run dgate as ./dgate; I cannot specify the full path /usr/local/conquest/dgate. The software came with an init script for a Debian system, so that script uses start-stop-daemon with the --chdir option pointing at where dgate lives, but I haven't found a way to do this with the Red Hat daemon function.
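
    An untested sketch of one workaround: when daemon is given --user it re-runs the command through a fresh shell for that user, so the cd performed by the calling script does not carry over; putting the cd inside the command string that daemon hands to that shell keeps ./dgate relative to the right directory. Whether dgate needs further arguments here is an assumption:

        start)
            echo -n "Starting Conquest DICOM server: "
            daemon --user mruser "cd $CONQ_DIR && ./dgate -v"
            echo
            touch /var/lock/subsys/conquest
            ;;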

    Read the article

  • TOD to AVI/MPG/WMV: converting .tod (MPEG-2) files for Movie Maker

    - by yearofhao
    Have a JVC Everio camcorder? Then you may encounter problems when saving its .tod (MPEG-2) files to your computer: Windows Movie Maker says it can't recognize or edit them. You can play them in a media player, but the problem is how to edit them. The bundled Power Cinema software can be annoying, since you can only edit while the camera is plugged into the PC; it doesn't seem able to edit from the saved clips alone. So how do you save the clips to the PC so that you can edit them without the camera, and in Windows Movie Maker?

    A TOD-to-AVI/MPG/WMV converter costs a little but can convert .tod files to AVI, MPG, WMV, YouTube FLV, MP4, DV, QuickTime MOV or other common video formats quickly while keeping the original HD quality, so high-definition TOD recordings from JVC camcorders can be played back, converted and edited. The iOrgsoft TOD converter is mostly used on Windows 7 or Vista; after the .tod files are downloaded from the JVC Everio to the PC, it is best to convert them to uncompressed/raw AVI, MPG or WMV, the formats Windows Movie Maker imports best. The converter also lets you clip/cut and crop TOD video for encoding, and transfer video to devices like an iPhone, iPod, or an HDTV connected to an Apple TV.

    Read the article

  • No headphone or speakers plugged in - Windows 7 issue

    - by Amit Ranjan
    I am facing a weird issue between my sound card driver and Windows 7 (any edition). I have a Sony Vaio notebook (VPCEB24EN). Two days ago, on startup, I got disabled Wi-Fi, charging and speakers. I restarted the PC and everything worked fine. Later I restarted the machine again and found that my speakers were not working. I thought another restart would help, but I restarted many more times and it didn't. I searched Google and found that it might be due to: 1. the hardware not being switched on from the BIOS; 2. missing hardware; 3. overriding 64-bit drivers with 32-bit drivers.

    To get it working, I restored my laptop from scratch, but while restoring the PC the Realtek HD drivers gave me an error 505. I then formatted the drive and installed Win7 Ultimate 32-bit (the PC shipped with Win7 64-bit Home Basic). I got lots of yellow exclamation marks in Device Manager and, thinking this would resolve my issue, installed all the drivers from a fresh installation. Even after that, I am still in the same position: a red cross on the speaker icon - "No Speakers or Headphones plugged in".

    Please note: my laptop is a Vaio E Series, VPCEB24EN.
    Audio: Intel® High Definition Audio compatible, but it accepts Realtek audio. Using BIOS Agent, I found an Intel Chipset 5 Series audio adapter and an ATI RV370 audio adapter on my board.
    Installed OS: Win7 32-bit Ultimate (factory default was Win7 64-bit Home Basic).
    Memory: 3GB RAM / 320GB HDD.
    Display: ATI Mobility Radeon™ HD 5145 Graphics.

    Read the article

  • Debian: What are these files in /sys/devices/pci0000:00/ for?

    - by muhuk
    I am running Debian Squeeze on an MSI M670 laptop. I have the following files on my root drive, each 256MB:

        /sys/devices/pci0000:00/0000:00:05.0/resource1
        /sys/devices/pci0000:00/0000:00:05.0/resource1_wc

    Here is my lspci output:

        muhuk@debian:~$ lspci
        00:00.0 RAM memory: nVidia Corporation C51 Host Bridge (rev a2)
        00:00.2 RAM memory: nVidia Corporation C51 Memory Controller 1 (rev a2)
        00:00.3 RAM memory: nVidia Corporation C51 Memory Controller 5 (rev a2)
        00:00.4 RAM memory: nVidia Corporation C51 Memory Controller 4 (rev a2)
        00:00.5 RAM memory: nVidia Corporation C51 Host Bridge (rev a2)
        00:00.6 RAM memory: nVidia Corporation C51 Memory Controller 3 (rev a2)
        00:00.7 RAM memory: nVidia Corporation C51 Memory Controller 2 (rev a2)
        00:03.0 PCI bridge: nVidia Corporation C51 PCI Express Bridge (rev a1)
        00:05.0 VGA compatible controller: nVidia Corporation C51 [GeForce Go 6100] (rev a2)
        00:09.0 RAM memory: nVidia Corporation MCP51 Host Bridge (rev a2)
        00:0a.0 ISA bridge: nVidia Corporation MCP51 LPC Bridge (rev a3)
        00:0a.1 SMBus: nVidia Corporation MCP51 SMBus (rev a3)
        00:0a.3 Co-processor: nVidia Corporation MCP51 PMU (rev a3)
        00:0b.0 USB Controller: nVidia Corporation MCP51 USB Controller (rev a3)
        00:0b.1 USB Controller: nVidia Corporation MCP51 USB Controller (rev a3)
        00:0d.0 IDE interface: nVidia Corporation MCP51 IDE (rev a1)
        00:0e.0 IDE interface: nVidia Corporation MCP51 Serial ATA Controller (rev a1)
        00:0f.0 IDE interface: nVidia Corporation MCP51 Serial ATA Controller (rev a1)
        00:10.0 PCI bridge: nVidia Corporation MCP51 PCI Bridge (rev a2)
        00:10.1 Audio device: nVidia Corporation MCP51 High Definition Audio (rev a2)
        00:14.0 Bridge: nVidia Corporation MCP51 Ethernet Controller (rev a3)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        04:04.0 FireWire (IEEE 1394): O2 Micro, Inc. Firewire (IEEE 1394) (rev 02)
        04:04.2 SD Host controller: O2 Micro, Inc. Integrated MMC/SD Controller (rev 01)
        04:04.3 Mass storage controller: O2 Micro, Inc. Integrated MS/xD Controller (rev 01)
        04:09.0 Network controller: RaLink RT2561/RT61 rev B 802.11g

    I am speculating that these have something to do with the shared RAM my GPU is using. But why a file on disk? And why two of them?

    Read the article

  • JBossMQ - Clustered Queues/NameNotFoundException: QueueConnectionFactory error

    - by mfarver
    I am trying to get an application working on a JBoss cluster. It uses queues internally, and the developer claims that it should work correctly in a clustered environment. I have jbossmq set up as an ha-singleton on the cluster. The application works correctly on whichever node is currently running the queue, but fails on the other nodes with a "javax.naming.NameNotFoundException: QueueConnectionFactory not bound" error.

    I can look at JNDIView from the jmx-console and see that the QueueConnectionFactory class indeed only appears on the primary node in the Global context. Is there a way to see the cluster's JNDI listing instead of each server's?

    The steps I took, starting from a default JBoss 4.2.3.GA installation, were to use the "all" configuration, then remove /server/all/deploy/hsqldb-ds.xml and /deploy-hasingleton/jms/hsqldb-jdbc2-service.xml, copying the example/jms/mysql-jdbc2-service.xml file into its place (editing that file to use DefaultDS instead of MySqlDS). Finally I created a mysql-ds.xml file in the deploy directory pointing "DefaultDS" at an empty database, and a -services.xml file in the deploy directory with the queue definition, like the one below:

        <server>
          <mbean code="org.jboss.mq.server.jmx.Queue"
                 name="jboss.mq.destination:service=Queue,name=myfirstqueue">
            <depends optional-attribute-name="DestinationManager">
              jboss.mq:service=DestinationManager
            </depends>
          </mbean>
        </server>

    All of the other cluster features are working: the servers list each other in the view, and sessions are replicating back and forth. The JBoss documentation is somewhat light in this area; is there another setting I might have missed? Or is this likely to be a code issue (is there different code to do a JNDI lookup in a clustered environment)? Thanks

    Read the article

  • Testing realistic loads for new versions of existing web app

    - by David Cournapeau
    Assuming I have a relatively complex web application, I am interested in testing the performance of a new version using traffic that is as realistic as possible. The traffic is relatively complex (session-based, with lots of internal logic that depends on incoming requests), and the webapp depends on many servers (databases, frontends, etc.).

    I can think of two basic directions:

        1. Recording every incoming request with its timestamp in production, in a centralized manner, and replaying it from N clients to reproduce a load as close as possible to the original. Issue: because we have many servers, getting the centralized log is not trivial.
        2. Having a system duplicate requests to a staging area, so that I could "plug" a dev version of my webapp into it at any time without affecting production. Issue: I have not found much information about this except this, which suggests to me that it may not be the best solution. On the other hand, it is realistic by definition.

    What is the standard way of doing this kind of testing? I did not find much information about load testing with complex, realistic traffic.

    Read the article

  • Loading dependencies for custom puppet functions

    - by Ben Smith
    I have written a custom Puppet function, which is working fine, that depends on the cloudservers gem (a Rackspace client library). This is fine if I have pre-installed the gem on a server before running Puppet, but it totally breaks if I have not installed the gem, as the function seems to be run during the 'compilation' sweep, well before my package definition is realised.

    Here's what my .pp looks like, with get_hosts being the function that requires the cloudservers gem:

        package { "rubygems":
          ensure   => installed,
          provider => "gem";
        }

        package { "cloudservers":
          ensure   => installed,
          provider => "gem",
          require  => Package["rubygems"];
        }

        class hosts::us {
          $hosts = get_hosts("us")
          hostentry { $hosts: }
        }

        define hostentry() {
          $parts   = split($name, ',')
          $address = $parts[0]
          $ip      = $parts[1]
          $aliases = $parts[2]

          host { $address:
            ip           => $ip,
            host_aliases => $aliases,
          }
        }

    Is there a way to stop the function being run so early, or at least to have its run depend on the library being installed? Alternatively, is there a way I can add dependencies somewhere in the functions folder that will be available to the function?
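
    A sketch of the usual arrangement for the function file itself, assuming it lives at lib/puppet/parser/functions/get_hosts.rb in the module; moving the require into the function body defers loading the gem until the function is actually evaluated. Note that parser functions run on the machine compiling the catalog (the puppetmaster, or the node itself under puppet apply), so the gem has to be present there; a package resource applied to the agent will not satisfy it. The body below is illustrative only:

        # lib/puppet/parser/functions/get_hosts.rb (illustrative)
        module Puppet::Parser::Functions
          newfunction(:get_hosts, :type => :rvalue) do |args|
            require 'cloudservers'   # loaded lazily, only when the function is called

            region = args[0]
            # ... query the Rackspace API here and return an array of "address,ip,aliases" strings ...
            []
          end
        end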

    Read the article
