Search Results

Search found 4636 results on 186 pages for 'red gate'.

Page 53/186 | < Previous Page | 49 50 51 52 53 54 55 56 57 58 59 60  | Next Page >

  • The Developpez.com video game programming weekend has started! Come join us on the chat

    From August 22 to 24, come program a video game on the Developpez.com chat. Fellow programmers, the fourth edition is finally here! I have the honor of announcing that you can now set aside the weekend of August 22-24 to develop a video game with the members of Developpez.com. Get ready, order the pizzas, stock up on Red Bull, kick out your boyfriend/girlfriend (unless he/she can draw), because you are going to spend an intense weekend making a...

    Read the article

  • Novell Wins! SCO Loses!

    Computerworld: "Yes, it's true. After just over 7 years of SCO lawsuits, SCO has lost its last real chance of causing Linux and the companies that support it (IBM, Novell, and Red Hat) any real trouble."

    Read the article

  • RPi and Java Embedded GPIO: It all begins with hardware

    - by hinkmond
    So, you want to connect low-level peripherals (like blinky-blinky LEDs) to your Raspberry Pi and use Java Embedded technology to program it, do you? You sick foolish masochist. No, just kidding! That's awesome! You've come to the right place. I'll step you through it. And, as with many embedded projects, it all begins with hardware. So, the first thing to do is to get acquainted with the GPIO header on your RPi board. A "header" just means a thingy with a bunch of pins sticking up from it where you can connect wires. See the red box outline in the photo. Now, there are many ways to connect to that header outlined by the red box in the photo (which the RPi folks call the P1 header). One way is to use a breakout kit like the one at Adafruit. But, we'll just use jumper wires in this example. So, to connect jumper wires to the header you need a map of where to connect which wire. That's why you need to study the pinout in the photo. That's your map for connecting wires. But, as with many things in life, it's not all that simple. The RPi folks have made things a little tricky. There are two revisions of the P1 header pinout: one for older boards (RPi boards made before Sep 2012), which is called Revision 1, and one for those fancy 512MB boards that shipped after Sep 2012, which is called Revision 2. So, first make sure which board you have: either you have the Model A or B with 128MB or 256MB built before Sep 2012 and you need to look at the pinout for Rev. 1, or you have the Model B with 512MB and need to look at Rev. 2. That's all you need for now. More to come... Hinkmond
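
    A quick way to check which board you have from the command line, assuming a typical Debian-based Raspberry Pi image (match the revision code against the table in the RPi documentation):

        grep Revision /proc/cpuinfo   # the revision code differs between Rev. 1 and Rev. 2 boards
        free -m                       # roughly 512MB total points at a post-Sep 2012 board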

    Read the article

  • Override close button

    - by mmimaa
    How can I override the close button next to minimize/maximize so that the application doesn't close automatically? I want it to show an exit screen or something like that, so I would like to be able to delay the close and display something on the screen when I press the red close button. At first I thought that I should override the OnExiting() method, but I couldn't get it working. This is my first attempt at creating a fully functional game, so I have no previous experience in game development or with XNA.
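
    One common approach is to cancel the window's close request and flip a flag that Update/Draw use to show the exit screen. This is only a minimal sketch, assuming the game runs in the standard WinForms-backed XNA window on Windows (add a reference to System.Windows.Forms):

        using System;
        using System.Windows.Forms;
        using Microsoft.Xna.Framework;

        public class MyGame : Game
        {
            bool showingExitScreen;

            protected override void Initialize()
            {
                base.Initialize();

                // Grab the underlying WinForms window and intercept the close request.
                Form form = (Form)Form.FromHandle(Window.Handle);
                form.FormClosing += (sender, e) =>
                {
                    if (!showingExitScreen)
                    {
                        e.Cancel = true;           // keep the window open for now
                        showingExitScreen = true;  // Update/Draw can now show the exit screen
                    }
                };
            }

            // When the player confirms on the exit screen, call Exit() as usual.
        }

    OnExiting is raised once the exit is already under way and has no way to cancel it, which is likely why overriding it alone did not work.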

    Read the article

  • Collision detection problem in XNA

    - by Fantasy
    I'm having two problems with my collision detection in XNA. There are two boxes: the red box represents a player and the blue box represents a wall. The first problem: when the player moves along the top or bottom side of the wall and collides with it, then tries to go left or right, the player jumps in the opposite direction, as seen in the video. However, if I go to the right or left side of the wall and try to go up or down, the player moves smoothly up or down without jumping. The second problem: when I collide with the box and my key is still pressed down, the blue box goes halfway through the red box and back out, and it keeps doing that until I stop pressing the keyboard. It's not very clear in the video, but the player keeps going in and out really fast until I stop pressing the key. Here is a video example: http://www.youtube.com/watch?v=mKLJsrPviYo and here is my code:

        Vector2 Position;
        Rectangle PlayerRectangle, BoxRectangle;
        float Speed = 0.25f;
        enum Direction { Up, Right, Down, Left };
        Direction direction;

        protected override void Update(GameTime gameTime)
        {
            // Allows the game to exit
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();

            KeyboardState keyboardState = Keyboard.GetState();

            if (keyboardState.IsKeyDown(Keys.Up))
            {
                Position.Y -= (float)(Speed * gameTime.ElapsedGameTime.TotalMilliseconds);
                direction = Direction.Up;
            }
            if (keyboardState.IsKeyDown(Keys.Down))
            {
                Position.Y += (float)(Speed * gameTime.ElapsedGameTime.TotalMilliseconds);
                direction = Direction.Down;
            }
            if (keyboardState.IsKeyDown(Keys.Right))
            {
                Position.X += (float)(Speed * gameTime.ElapsedGameTime.TotalMilliseconds);
                direction = Direction.Right;
            }
            if (keyboardState.IsKeyDown(Keys.Left))
            {
                Position.X -= (float)(Speed * gameTime.ElapsedGameTime.TotalMilliseconds);
                direction = Direction.Left;
            }

            if (PlayerRectangle.Intersects(BoxRectangle))
            {
                if (direction == Direction.Right)
                    Position.X = BoxRectangle.Left - PlayerRectangle.Width;
                else if (direction == Direction.Left)
                    Position.X = BoxRectangle.Right;

                if (direction == Direction.Down)
                    Position.Y = BoxRectangle.Top - PlayerRectangle.Height;
                else if (direction == Direction.Up)
                    Position.Y = BoxRectangle.Bottom;
            }

            PlayerRectangle = new Rectangle((int)Position.X, (int)Position.Y, (int)32, (int)32);

            base.Update(gameTime);
        }
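
    A minimal sketch of one common fix, not the poster's code: resolve movement one axis at a time, so a collision on one axis never snaps the player along the other axis. Field names (Position, Speed, PlayerRectangle, BoxRectangle) are assumed to match the question; something like this would replace the per-key blocks and the single intersection check in Update():

        // Hedged sketch: move and resolve X, then move and resolve Y.
        void MoveAndCollide(GameTime gameTime, KeyboardState keyboardState)
        {
            float delta = Speed * (float)gameTime.ElapsedGameTime.TotalMilliseconds;
            float dx = 0, dy = 0;
            if (keyboardState.IsKeyDown(Keys.Left))  dx -= delta;
            if (keyboardState.IsKeyDown(Keys.Right)) dx += delta;
            if (keyboardState.IsKeyDown(Keys.Up))    dy -= delta;
            if (keyboardState.IsKeyDown(Keys.Down))  dy += delta;

            // X axis: move, then push back out along X only.
            Position.X += dx;
            PlayerRectangle = new Rectangle((int)Position.X, (int)Position.Y, 32, 32);
            if (dx != 0 && PlayerRectangle.Intersects(BoxRectangle))
                Position.X = dx > 0 ? BoxRectangle.Left - PlayerRectangle.Width : BoxRectangle.Right;

            // Y axis: move, then push back out along Y only.
            Position.Y += dy;
            PlayerRectangle = new Rectangle((int)Position.X, (int)Position.Y, 32, 32);
            if (dy != 0 && PlayerRectangle.Intersects(BoxRectangle))
                Position.Y = dy > 0 ? BoxRectangle.Top - PlayerRectangle.Height : BoxRectangle.Bottom;
        }

    Because each axis is corrected immediately after it moves, the "jump to the opposite side" and the in-and-out jitter both disappear: the overlap is always resolved along the axis that caused it.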

    Read the article

  • SEO FAQ For Small Business 7 of 10 - How Long For SEO Results? (SEO Results Time Frame)

    The hours and effort needed to increase the visibility of your small business website depend greatly on your marketing plan and which keywords you're going after. A year-long optimization campaign is not unheard of for a large site. If your site is less than three or four months old, or the domain expires in less than a year, you may struggle to rank highly despite your best efforts, as both of these are 'red flags' to the search engines.

    Read the article

  • High Load mysql on Debian server

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory, running apache2, memcached and nginx. Memory usage is always at maximum, with only about 500 MB free, and MySQL is by far the biggest consumer. Apache is configured for only 70 clients, and the other services use little memory. When MySQL uses up all the memory it stops, nothing works, and MySQL needs a restart. MySQL is configured to use at most 24 GB of memory. I have heavyweight InnoDB databases (400,000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into these tables, which is why I use InnoDB. Here is my MySQL config:

        [mysqld]
        #
        # * Basic Settings
        #
        default-time-zone = "+04:00"
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        language = /usr/share/mysql/english
        skip-external-locking
        default-time-zone = 'Europe/Moscow'
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #
        # * Fine Tuning
        #
        #low_priority_updates = 1
        concurrent_insert = ALWAYS
        wait_timeout = 600
        interactive_timeout = 600
        #normal
        key_buffer_size = 2024M
        #key_buffer_size = 1512M
        #70% hot cache
        key_cache_division_limit = 70
        #16-32
        max_allowed_packet = 32M
        #1-16M
        thread_stack = 8M
        #40-50
        thread_cache_size = 50
        #orderby groupby sort
        sort_buffer_size = 64M
        #same
        myisam_sort_buffer_size = 400M
        #temp table creates when group_by
        tmp_table_size = 3000M
        #tables in memory
        max_heap_table_size = 3000M
        #on disk
        open_files_limit = 10000
        table_cache = 10000
        join_buffer_size = 5M
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #myisam_use_mmap = 1
        max_connections = 200
        thread_concurrency = 8
        #
        # * Query Cache Configuration
        #
        #more ignored
        query_cache_limit = 50M
        query_cache_size = 210M
        #on query cache
        query_cache_type = 1
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /var/log/mysql/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time = 1
        log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        server-id = 1
        log-bin = /var/lib/mysql/mysql-bin
        #replicate-do-db = gate
        log-bin-index = /var/lib/mysql/mysql-bin.index
        log-error = /var/lib/mysql/mysql-bin.err
        relay-log = /var/lib/mysql/relay-bin
        relay-log-info-file = /var/lib/mysql/relay-bin.info
        relay-log-index = /var/lib/mysql/relay-bin.index
        binlog_do_db = 24avia
        expire_logs_days = 10
        max_binlog_size = 100M
        read_buffer_size = 4024288
        innodb_buffer_pool_size = 5000M
        innodb_flush_log_at_trx_commit = 2
        innodb_thread_concurrency = 8
        table_definition_cache = 2000
        group_concat_max_len = 16M
        #binlog_do_db = gate
        #binlog_ignore_db = include_database_name
        #
        # * BerkeleyDB
        #
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        #skip-bdb
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 500M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 32M
        key_buffer_size = 512M
        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        # The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    Please help me make it stable. Memory usage:

        /etc/mysql # free
                     total       used       free     shared    buffers     cached
        Mem:      32930800   32766424     164376          0     139208   23829196
        -/+ buffers/cache:    8798020   24132780
        Swap:     33553328      44660   33508668

    Maybe my problem is not memory, but MySQL still stops every day. As you can see, about 24 GB of that is cache. Thanks to Michael Hampton for the correction. Load average on the server is 3.5. Maybe it is the disk, or another problem? Maybe my config is not optimal for 30 GB of InnoDB data?
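
    A rough back-of-the-envelope bound using the values from this config may explain the behaviour. This is only an estimate (MySQL does not normally allocate all of it at once), but it shows how the configured ceiling can be exceeded:

        global buffers:   innodb_buffer_pool_size (5000M) + key_buffer_size (2024M)
                          + query_cache_size (210M)                          ~  7.2 GB
        per connection:   sort_buffer_size (64M) + join_buffer_size (5M)
                          + read_buffer_size (~4M) + thread_stack (8M)       ~  81 MB
        worst case:       7.2 GB + 200 connections x 81 MB                   ~ 23 GB
        plus each in-memory temporary table may grow up to
        tmp_table_size / max_heap_table_size = 3000M

    So a handful of busy connections doing large sorts or GROUP BYs with implicit temporary tables can push mysqld well past the intended 24 GB on their own.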

    Read the article

  • Blank black screen with cursor after login -- RHEL5

    - by Sean O.
    I have a RHEL 5 machine here which is a Dell Precision T3500. I'm an Ubuntu guy, but I'm having a heck of a time with this machine. After processing its first security update, we cannot log in via the gdm greeter. A new kernel was installed; then I installed the nVidia drivers for our Quadro NVS 295. I know the X configuration is valid because the gdm greeter does display; however, upon login all we can get is a blank, black screen with a cursor. I thought perhaps our python installation was corrupted but a reinstall via yum has not helped. I have searched & googled extensively for a potential fix for this and can find nothing. Below are outputs from uname, a tail of an error in /var/log/messages, and the Xorg.conf. Can anyone suggest a course of action? [sean@cheetah ~]$ uname -a Linux cheetah.*.* 2.6.18-308.8.1.el5 #1 SMP Fri May 4 16:43:02 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux [sean@cheetah ~]$ sudo tail /var/log/messages Jun 5 15:03:04 cheetah gconfd (sean-4592): Resolved address "xml:readonly:/etc/gconf/gconf.xml.defaults" to a read-only configuration source at position 2 Jun 5 15:03:05 cheetah hcid[3855]: Default passkey agent (:1.8, /org/bluez/applet) registered Jun 5 15:03:05 cheetah pcscd: winscard.c:304:SCardConnect() Reader E-Gate 0 0 Not Found Jun 5 15:03:05 cheetah last message repeated 2 times Jun 5 15:03:06 cheetah gconfd (sean-4592): Resolved address "xml:readwrite:/home/sean/.gconf" to a writable configuration source at position 0 Jun 5 15:03:06 cheetah setroubleshoot: [program.ERROR] exception ImportError: /usr/lib/libatk-1.0.so.0: undefined symbol: g_assertion_message_expr Traceback (most recent call last): File "/usr/bin/sealert", line 952, in ? from setroubleshoot.gui_utils import * File "/usr/lib/python2.4/site-packages/setroubleshoot/gui_utils.py", line 26, in ? import gtk File "/usr/lib64/python2.4/site-packages/gtk-2.0/gtk/__init__.py", line 48, in ? from gtk import _gtk ImportError: /usr/lib/libatk-1.0.so.0: undefined symbol: g_assertion_message_expr Jun 5 15:03:07 cheetah setroubleshoot: [program.ERROR] exception ImportError: /usr/lib/libatk-1.0.so.0: undefined symbol: g_assertion_message_expr Traceback (most recent call last): File "/usr/bin/sealert", line 952, in ? from setroubleshoot.gui_utils import * File "/usr/lib/python2.4/site-packages/setroubleshoot/gui_utils.py", line 26, in ? import gtk File "/usr/lib64/python2.4/site-packages/gtk-2.0/gtk/__init__.py", line 48, in ? 
from gtk import _gtk ImportError: /usr/lib/libatk-1.0.so.0: undefined symbol: g_assertion_message_expr Jun 5 15:03:08 cheetah pcscd: winscard.c:304:SCardConnect() Reader E-Gate 0 0 Not Found Jun 5 15:07:01 cheetah ntpd[4114]: synchronized to 64.16.211.38, stratum 3 Jun 5 15:07:01 cheetah ntpd[4114]: kernel time sync enabled 0001 [sean@cheetah ~]$ cat /etc/X11/xorg.conf # nvidia-xconfig: X configuration file generated by nvidia-xconfig # nvidia-xconfig: version 295.53 ([email protected]) Sat May 12 00:34:20 PDT 2012 # Xorg configuration created by system-config-display Section "ServerLayout" Identifier "single head configuration" Screen 0 "Screen0" 0 0 InputDevice "Mouse0" "CorePointer" InputDevice "Keyboard0" "CoreKeyboard" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/input/mice" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" Identifier "Keyboard0" Driver "kbd" Option "XkbModel" "pc105" Option "XkbLayout" "us" EndSection Section "Monitor" ### Comment all HorizSync and VertSync values to use DDC: ### Comment all HorizSync and VertSync values to use DDC: Identifier "Monitor0" ModelName "LCD Panel 1600x1200" HorizSync 31.5 - 74.7 VertRefresh 56.0 - 65.0 Option "dpms" EndSection Section "Device" Identifier "Videocard0" Driver "nvidia" EndSection Section "Screen" Identifier "Screen0" Device "Videocard0" Monitor "Monitor0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection

    Read the article

  • Account Lockout with pam_tally2 in RHEL6

    - by Aaron Copley
    I am using pam_tally2 to lock out accounts after 3 failed logins per policy; however, the connecting user does not receive the error indicating pam_tally2's action (via SSH). I expect to see on the 4th attempt:

        Account locked due to 3 failed logins

    No combination of required or requisite, or the order in the file, seems to help. This is under Red Hat 6, and I am using /etc/pam.d/password-auth. The lockout does work as expected, but the user does not receive the error described above. This causes a lot of confusion and frustration, as they have no way of knowing why authentication fails when they are sure they are using the correct password. The implementation follows the NSA's Guide to the Secure Configuration of Red Hat Enterprise Linux 5 (pg. 45). It's my understanding that the only thing changed in PAM is that /etc/pam.d/sshd now includes /etc/pam.d/password-auth instead of system-auth.

    If locking out accounts after a number of incorrect login attempts is required by your security policy, implement use of pam_tally2.so. To enforce password lockout, add the following to /etc/pam.d/system-auth. First, add to the top of the auth lines:

        auth required pam_tally2.so deny=5 onerr=fail unlock_time=900

    Second, add to the top of the account lines:

        account required pam_tally2.so

    EDIT: I get the error message by resetting pam_tally2 during one of the login attempts.

        user@localhost's password: (bad password)
        Permission denied, please try again.
        user@localhost's password: (bad password)
        Permission denied, please try again.
        (reset pam_tally2 from another shell)
        user@localhost's password: (good password)
        Account locked due to ...
        Account locked due to ...
        Last login: ...
        [user@localhost ~]$
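
    One thing worth checking, offered as an assumption rather than something from the guide: pam_tally2 delivers its "Account locked" text through the PAM conversation, and OpenSSH typically only relays those messages for keyboard-interactive logins, not plain password authentication. A sketch of that change, plus the commands for inspecting or clearing the counter:

        # /etc/ssh/sshd_config (assumption: relaying PAM messages needs keyboard-interactive)
        ChallengeResponseAuthentication yes
        # then: service sshd reload

        # Inspect or clear the failure counter for a user
        pam_tally2 --user=someuser
        pam_tally2 --user=someuser --reset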

    Read the article

  • IBM HS23 Blade Server (7875) onboard NIC driver for linux

    - by Igor Spivak
    I work with an IBM HS23 Blade Server (7875). Its onboard NIC adapter is an Emulex OCl11104-F-X Virtual Fabric Adapter, 2-port 10GB and 2-port 1GB LOM. I tried the following Linux OS with the server: 2.6.32-22-generic-pae #36-Ubuntu SMP, and discovered that my OS does not have the proper network driver installed (for the NIC adapter described above). After some investigation, I found that the driver I need is "be2net", located in the "net" directory of the Linux kernel under the folder "be2net". I managed to download this driver with the latest package for my kernel. Driver info (the result of "modinfo be2net") is as follows:

        filename:     /lib/modules/2.6.32-22-generic-pae/kernel/drivers/net/benet/be2net.ko
        license:      GPL
        author:       ServerEngines Corporation
        description:  ServerEngines BladeEngine2 10Gbps NICDriver 2.101.205
        version:      2.101.205
        srcversion:   199ADD251CB874C3727CC47
        alias:        pci:v000019A2d00000710sv*sd*bc*sc*i*
        alias:        pci:v000019A2d00000701sv*sd*bc*sc*i*
        alias:        pci:v000019A2d00000700sv*sd*bc*sc*i*
        alias:        pci:v000019A2d00000221sv*sd*bc*sc*i*
        alias:        pci:v000019A2d00000211sv*sd*bc*sc*i*
        depends:
        vermagic:     2.6.32-22-generic-pae SMP mod_unload modversions 586TSC
        parm:         rx_frag_size:Size of a fragment that holds rcvd data. (uint)

    After starting Linux, I get the following error:

        be2net 0000:16:00.x: Emulex OneConnect 10Gbps NIC (be3) initilization failed.

    I checked the same server with another Linux version (Red Hat 5.5.1.0) and the NICs worked properly, so it seems there is no problem in the hardware. Also, on the IBM and Emulex official sites I managed to find drivers only for Red Hat and SUSE versions.
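
    A few generic checks that may help narrow this down (a sketch, not a fix; interface names are placeholders):

        lspci | grep -i emulex     # confirm which Emulex device the kernel sees
        ethtool -i eth0            # driver, driver version and firmware version actually in use
        dmesg | grep -i be2net     # full initialization messages around the failure

    Since the working Red Hat 5.5 install ships a newer be2net than the in-tree 2.101.205 here, one plausible reading (an assumption to verify) is that this driver simply predates the BE3 chip on the HS23 LOM, and a newer out-of-tree be2net source package built against this kernel would be needed.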

    Read the article

  • FFMPEG Install on EC2 - Amazon Linux

    - by Oliver Holmberg
    Hello Serverfault friends, I am about two days into attempting to install FFMPEG with dependencies on an AWS EC2 instance running the Amazon Linux AMI. I've installed FFMPEG on Ubuntu and Fedora systems with no problems in the past, and have read reportedly successful instructions on installing on Red Hat/Fedora. I have followed a number of tutorials and forum articles to do so, but have had no luck yet. As far as I can tell, the main problems are as follows:

    1. The Amazon Linux (most similar to Red Hat/CentOS) yum repositories don't have FFMPEG available. I have found instructions to update the repositories to include the required packages, but adding these repositories causes yum to fail when updating packages. (Also, I've read some cautionary tales about adding Red Hat/CentOS repositories to Amazon Linux that lead me to believe it may be a bad idea: https://forums.aws.amazon.com/thread.jspa?messageID=229166)

    2. I have tried a more complicated method of downloading the source tarball, compiling, and installing, but this always fails due to missing dependencies and other errors.

    On to my question: Has anyone successfully installed FFMPEG on Amazon Linux? Is there a fundamental incompatibility? If anyone could share specific instructions on installing FFMPEG on Amazon Linux I would be greatly appreciative. Any other insights/experiences would also be appreciated. Thanks in advance, Oliver
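
    One route that avoids third-party repositories entirely is building from source against the stock toolchain. A hedged sketch only; package names and the git URL are assumptions to verify against the repos available to your AMI:

        sudo yum -y groupinstall "Development Tools"
        sudo yum -y install yasm git
        git clone git://source.ffmpeg.org/ffmpeg.git
        cd ffmpeg
        ./configure --prefix=/usr/local --disable-shared --enable-gpl
        make && sudo make install

    External codecs (libx264, libmp3lame, and so on) each need their own library installed first and the matching --enable-lib... flag added to configure.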

    Read the article

  • Unexplained CPU and Disk activity spikes in SQL Server 2005

    - by Philip Goh
    Before I pose my question, please allow me to describe the situation. I have a database server with a number of tables. Two of the biggest tables contain over 800k rows each. The majority of rows are less than 10k in size, though roughly 1 in 100 rows will be between 1 MB and 4 MB. So out of the 1.6 million rows, about 16,000 of them will be these large rows. The reason they are this big is because we're storing zip files as binary blobs in the database, but I'm digressing. We have a service that runs constantly in the background, trimming 10 rows from each of these 2 tables. In the performance monitor graph above, these are the little bumps (red for CPU, green for disk queue). Once every minute we get a large spike of CPU activity together with a jump in disk activity, indicated by the red arrow in the screenshot. I've run the SQL Server profiler, and there is nothing that jumps out as a candidate that would explain this spike. My suspicion is that this spike occurs when one of the large rows gets deleted. I've fed the results of the profiler into the tuning wizard, and I get no optimisation recommendations (i.e. I assume this means my database is indexed correctly for my current workload). I'm not overly worried, as the server is coping fine in all circumstances, even under peak load. However, I would like to know if there is anything else I can do to find out what is causing this spike? Update: After investigating this some more, the CPU and disk usage spike was down to SQL Server's automatic checkpoint. The database uses the simple recovery model, and this truncates the log file at each checkpoint. We can see this demonstrated in the following graph. As described on MSDN, the checkpoints will occur when the transaction log becomes 70% full and we are using the simple recovery model. This has been enlightening and I've definitely learned something!
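
    For anyone chasing a similar spike, a couple of hedged starting points for confirming it is checkpoint/log related (SQL Server 2005 syntax):

        -- How full each database's transaction log is (simple-recovery checkpoints fire around 70% full)
        DBCC SQLPERF(LOGSPACE);

        -- Checkpoint and log-flush activity from the performance counters DMV
        SELECT object_name, counter_name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN ('Checkpoint pages/sec', 'Log Flushes/sec');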

    Read the article

  • Endian Destination NAT

    - by Ben Swinburne
    I have installed Endian Community Firewall 2.3 and am clearly misunderstanding/doing something wrong with it. I'm trying to create some destination NAT rules to allow incoming connections to various services within the network.

        Router - RED I/F    - x.x.x.x
        Router - GREEN I/F  - 192.168.11.253
        ECF    - RED I/F    - 192.168.11.254/24
        ECF    - GREEN I/F  - 192.168.12.254/24
        Target server       - 192.168.12.1

    Please ignore the haphazard choice of subnets and addresses; I'm trying to quickly plop Endian into an existing network before a complete rework in 6-12 months, so this is just for now. Everything works except destination NAT, so outgoing connections are fine, the routes between the two subnets are OK, etc. I want to create various incoming NATs, but let's take, for the sake of argument, SMTP port 25 from the Internet to the target server 192.168.12.1. I've tried almost every combination of options in the Destination NAT section to achieve this and am clearly doing something wrong. I suspect my confusion must be somewhere in the Access From and/or Target section; the rest seems OK.

        Filter Policy      = Allow
        Service            = SMTP
        Protocol           = TCP
        Port               = 25
        Translate to type  = IP
        DNAT Policy        = NAT
        Insert IP          = 192.168.12.1
        Port Range         = 25
        Enabled            = Checked
        Position           = First

    I can't work out what I'm doing wrong, or am I doing it right and it's just not working? Any help would be greatly appreciated.
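
    For comparison only, this is roughly what the intended rule amounts to in plain iptables terms (not Endian syntax; the RED interface name is a placeholder). If a rule like this works from a shell on the ECF box but the GUI rule does not, the problem is in how the GUI rule is scoped rather than in routing:

        iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 25 -j DNAT --to-destination 192.168.12.1:25
        iptables -A FORWARD -p tcp -d 192.168.12.1 --dport 25 -j ACCEPT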

    Read the article

  • UPS with a HP Proliant server

    - by Groo
    We placed an EATON Ellipse Max 1500 (900W) as the UPS for our HP ProLiant ML350 G6. Upon the first power failure (actually we only moved the UPS's input plug to a different socket), the server immediately turned off, and the Health LED turned red and started blinking. The UPS had been in operation for about a week before that, with the battery fully charged to 100%. Since our server's hot-plug supply is 460W, we are pretty sure we haven't overloaded it; the server was completely idle at that time (no web or Windows apps running except Windows Server core services). Then we tried the same with a different, no-name older PC (Core 2 Duo, 2GB RAM) with a generic power supply (not sure what its rating is) and it continued working when we pulled the plug out. UPS load was less than 15% (measured in the provided Eaton utility). We measured the UPS's output voltage using a smart oscilloscope and the THD of the UPS output waveform turned out to be 40%. Have you had similar experiences? Could this be a faulty UPS? Or a faulty power supply? Or some HP sensors configured to trigger too strictly? I wouldn't want to replace this UPS with the same brand only to get the same results. [Edit] I also tried this while the server was turned off. While the UPS is running on battery, the server will not start; as soon as I press the power button, the Health LED starts blinking red.

    Read the article

  • How do I install the main repositories for RHEL6

    - by eisaacson
    We've set up RHEL6 on a new server. As far as we can tell, our subscription is all set up properly. However, when I run yum repolist, it doesn't show any repositories. /etc/yum.repos.d/redhat.repo is empty. I tried pasting in the content from another RHEL6 server's redhat.repo, but as soon as I run yum, it wipes it out again. I just need to get the basic Red Hat repositories set up so I can install packages.

    EDIT: Using the GUI, I went to System > Administration > Red Hat Subscription Manager. Under the 'Products' tab, it did not show any products.

    EDIT: When I run yum update, here's what I get:

        # yum update
        Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
        This system is receiving updates from Red Hat Subscription Management.
        Setting up Update Process
        No Packages marked for Update

    When I log in to the Red Hat customer portal, it shows that subscription as active.

    EDIT: To make sure I wasn't having a subscription issue, I re-registered and re-subscribed. I get all the same results.

        # subscription-manager register --force
        # subscription-manager subscribe --pool=*redacted*

    EDIT: Contents of /etc/yum.conf:

        [main]
        cachedir=/var/cache/yum/$basearch/$releasever
        keepcache=0
        debuglevel=2
        logfile=/var/log/yum.log
        exactarch=1
        obsoletes=1
        gpgcheck=1
        plugins=1
        installonly_limit=3

    Contents of /etc/yum/pluginconf.d/rhnplugin.conf:

        [main]
        enabled = 0
        gpgcheck = 1
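
    A hedged checklist that can regenerate an empty redhat.repo (subcommand availability varies with the subscription-manager version, so treat these as things to try rather than a guaranteed fix):

        subscription-manager refresh        # re-pull entitlement certificates; rewrites redhat.repo
        subscription-manager repos --list   # repos the attached subscription actually entitles you to
        yum clean all && yum repolist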

    Read the article

  • Why do I get this message from chrome when navigating to https://www.amazon.com?

    - by Denis
    This is probably not the site you are looking for! You attempted to reach www.amazon.com, but instead you actually reached a server identifying itself as *.voxcdn.com. This may be caused by a misconfiguration on the server or by something more serious. An attacker on your network could be trying to get you to visit a fake (and potentially harmful) version of www.amazon.com. Intermittently, I get a blank page when going to http://www.amazon.com. So I stuck an 's' in the URL, making it https://www.amazon.com, and got that message above (with the nice red screen) from Chrome indicating there might be some monkey business going on. After hammering on the URL a bunch of times and pulling it up in Chrome's developer tools to look at the network traffic, the URL (without the s) started behaving. The URL with the s just hangs, but the red screen no longer comes up. Some specs... I've got a MacBook Pro, Snow Leopard, Time Warner cable. I've had enough strange stuff happening over the past couple of months (google.com, youtube.com, amazon.com not coming up, or loading strange error messages with random reference numbers) that I finally decided to switch to OpenDNS. Still having problems, though.
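
    One quick check that would separate a resolver problem from a CDN/SSL problem (a suggestion, not a diagnosis): compare what your current resolver returns for the hostname with an answer straight from OpenDNS.

        dig +short www.amazon.com
        dig +short www.amazon.com @208.67.222.222   # one of the OpenDNS resolvers

    If the two answers differ, the issue is in whichever resolver your machine is actually using rather than in Amazon's certificates.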

    Read the article

  • Where do I connect the HDD LED wires on my RAID adapter?

    - by Giffyguy
    I'm using a Promise FastTrak TX8660 with RAID 5. The manual (and Google) just doesn't seem to explain how exactly to connect a standard two-pin HDD LED wire to the eight available pins on the card. The manual just says to connect your LED following its diagram, and the card itself matches that diagram (neither image is reproduced here), but it doesn't make any sense to me. All I have is a two-pin connector for the HDD LED on the front of my computer case. I don't need anything fancy like the fault LED or separate indicators for each drive. I just want to be able to see when my RAID 5 array is working, that's all. I don't know what the "R" and "G" stand for, but my HDD LED wires are red and white. I tried connecting the red wire to the "R" pin and the white wire to the "G" pin, but that just makes the LED on the front of my case light up indefinitely, even when the computer is idle. Which pins am I supposed to connect the HDD LED header to for basic activity indication?

    Read the article

  • Software to clean up photos of whiteboards and documents?

    - by Norman Ramsey
    I take a lot of photos of whiteboards, blackboards, and so on for teaching purposes (examples online through May 2010). I'm interested in cleaning them up for archival purposes, preferably using Linux. Commercial products ClearBoard and PhotoNote are priced a little aggressively for my purposes, plus my students would like to have this capability too. Does anyone know of any good, open source software for Converting photographs to images with just a few colors? Eliminating perspective distortion? Removing unwanted junk from around the edges of an image? or anything like that? I'm imagining that I start out with a picture of my whiteboard using red and black markers, and I end up with a three-color image using just white, red, and black. Or I photograph a laser-printed document and end up with a clean black-and-white image. I have tried standard tools that reduce the number of colors in an image, and they do a terrible job—probably because they are trying to reproduce the uneven illumination of the original image. Command-line Linux tools would be ideal.
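
    Not a full answer, but a hedged ImageMagick starting point for the black-and-white (printed document) case; the numbers need tuning per photo, and the color-marker case needs per-channel work that this does not cover:

        # Local adaptive thresholding copes with uneven lighting better than a global
        # color reduction; -deskew fixes small rotations (true perspective correction
        # still needs -distort Perspective with hand-picked corner points).
        convert photo.jpg -colorspace Gray -deskew 40% -lat 40x40+10% cleaned.png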

    Read the article

  • Help setting up a secondary authoritative DNS server.

    - by GLB03
    We have three authoritative DNS servers and three recursive/caching DNS servers on my campus.

    Authoritative servers:

        DNS1 - Windows 2003
        DNS2 - Old Red Hat (replacing w/ newer version)
        DNS3 - Windows 2008 (I installed)

    Caching and recursive resolvers:

        Server1 - Windows 2003
        Server2 - CentOS 5.2 (I installed)
        Server3 - CentOS 5.3 (I installed)

    I am replacing DNS2 with a newer Red Hat version, but have no documentation on how it was implemented. I have set up caching and Windows authoritative servers, but not a Linux secondary authoritative server. I have a Perl script from the original server that pulls data from our DNS1 server. We use DJBDNS and TinyDNS on our Linux servers. Our network engineer says the DNS2 server I am replacing is an authoritative server that doesn't need to be caching, but the only instructions I see are for an authoritative server that does caching as well. Can someone point me in the right direction? I thought I was on the right track with using these instructions, but when I query my new DNS server I get "No response from server". I have temporarily disabled iptables to eliminate it as an issue.

        ps -aux | grep dns
        avahi    3493  0.0  0.2  2600 1272 ?     Ss  Apr24  0:05 avahi-daemon: running [newdns2.local]
        root     5254  0.0  0.1  3920  680 pts/0 R+  09:56  0:00 grep dns
        root     6451  0.0  0.0  1528  308 ?     S   Apr29  0:00 supervise tinydns
        dnslog   6454  0.0  0.0  1540  308 ?     S   Apr29  0:00 multilog t ./main
        tinydns  9269  0.0  0.0  1652  308 ?     S   Apr29  0:00 /usr/local/bin/tinydns
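
    In case it helps, here is a minimal sketch of the usual djbdns "secondary" pattern: pull each zone from the master by AXFR and rebuild tinydns's data file. The IP address, zone name and paths are placeholders, and the Windows master has to allow zone transfers to this host; run something like this from cron:

        cd /etc/tinydns/root                      # wherever tinydns-conf put the data directory
        tcpclient -v 10.0.0.1 53 axfr-get example.edu data.example.edu data.example.edu.tmp
        cat data.example.edu > data               # or concatenate several per-zone files
        make                                      # runs tinydns-data to rebuild data.cdb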

    Read the article

  • Wget save cookies not working

    - by TrymBeast
    I've been trying to login in the pyload through the web api, but wget is not saving the cookies and I don't understand why. I'm using the following command: wget --delete-after --keep-session-cookies --save-cookies=my_cookies.txt --post-data="username=USERNAME&password=PASSWORD" http://localhost:8000/api/login But the content of my_cookies.txt is: # HTTP cookie file. # Generated by Wget on 2012-06-23 22:31:33. # Edit at your own risk. When I run the same command but in debug mode I get the following output that includes the set cookie in the header response: DEBUG output created by Wget 1.10.2 (Red Hat modified) on linux-gnueabi. --22:31:11-- http://localhost:8000/api/login Resolving localhost... 127.0.0.1 Caching localhost => 127.0.0.1 Connecting to localhost|127.0.0.1|:8000... connected. Created socket 3. Releasing 0x000504d0 (new refcount 1). ---request begin--- POST /api/login HTTP/1.0 User-Agent: Wget/1.10.2 (Red Hat modified) Accept: */* Host: localhost:8000 Connection: Keep-Alive Content-Type: application/x-www-form-urlencoded Content-Length: 32 ---request end--- [POST data: username=USERNAME&password=PASSWORD] HTTP request sent, awaiting response... ---response begin--- HTTP/1.1 200 OK Content-Length: 34 Content-Type: application/json Cache-Control: no-cache, must-revalidate Set-cookie: beaker.session.id=405390ddc809efed54820638c95d7997; expires=Tue, 19-Jan-2038 04:14:07 GMT; Path=/ Connection: Keep-Alive Date: Sat, 23 Jun 2012 21:31:11 GMT Server: CherryPy/3.1.2 WSGI Server ---response end--- 200 OK hs->local_file is: login (not existing) Registered socket 3 for persistent reuse. TEXTHTML is on. Length: 34 [application/json] Saving to: `login' 100%[=======================================>] 34 --.-K/s in 0s 22:31:11 (1.28 MB/s) - `login' saved [34/34] Removing file due to --delete-after in main(): Removing login. Saving cookies to my_cookies.txt. Done saving cookies. Can anyone tell me what am I doing wrong? Thanks in advance!
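
    One way to narrow this down (a hedged suggestion, using the same URL and form fields as above): do the same login with curl and see whether the session cookie lands in its jar. If curl saves it, the problem is with this particular wget build rather than with the server's Set-cookie header.

        curl -c my_cookies.txt -d "username=USERNAME&password=PASSWORD" http://localhost:8000/api/login
        cat my_cookies.txt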

    Read the article

< Previous Page | 49 50 51 52 53 54 55 56 57 58 59 60  | Next Page >