Search Results

Search found 115 results on 5 pages for 'timothy khouri'.

Page 2/5 | < Previous Page | 1 2 3 4 5  | Next Page >

  • LINQ-to-SQL: How can I prevent 'objects you are adding to the designer use a different data connection...'?

    - by Timothy Khouri
    I am using Visual Studio 2010, and I have a LINQ-to-SQL DBML file that my colleagues and I are using for this project. We have a connection string in the web.config file that the DBML is using. However, when I drag a new table from my "Server Explorer" onto the DBML file... I get presented with a dialog that demands that I choose one of these two options: Allow Visual Studio to change the connection string to match the one in my Solution Explorer. Cancel the operation (meaning, I don't get my table). I don't really care too much about the debate as to why the PMs/devs who made this tool didn't allow a third option - "Create the object anyway - don't worry, I'm a developer!" What I am thinking would be a good solution is if I can create a connection in the Server Explorer - WITHOUT A WIZARD. If I can just paste a connection string, that would be awesome! Because then the DBML designer won't freak out on me :O) If anyone knows the answer to this question, or how to do the above, please lemme know!
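
    A side note on the run-time half of this (a sketch, not a fix for the designer dialog itself): whatever connection the DBML designer ends up recording, the generated data context can be handed the web.config connection string explicitly, so letting Visual Studio update the designer's connection doesn't have to change what the deployed app connects to. MyDataContext and "MyConnectionString" below are placeholder names, not taken from the post:

        using System.Configuration;

        // "MyConnectionString" is a hypothetical key under <connectionStrings>
        // in web.config; MyDataContext stands in for the designer-generated class.
        string cs = ConfigurationManager
            .ConnectionStrings["MyConnectionString"].ConnectionString;

        using (var db = new MyDataContext(cs))
        {
            // Queries here use the web.config connection, regardless of the
            // connection the designer stored when the table was dragged in.
        }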

    Read the article

  • In TSQL (SQL Server), How do I insert multiple rows WITHOUT repeating the "INSERT INTO dbo.Blah" part?

    - by Timothy Khouri
    I know I've done this before years ago, but I can't remember the syntax, and I can't find it anywhere due to pulling up tons of help docs and articles about "bulk imports". Here's what I want to do, but the syntax is not exactly right... please, someone who has done this before, help me out :) INSERT INTO dbo.MyTable (ID, Name) VALUES (123, 'Timmy'), (124, 'Jonny'), (125, 'Sally') I know that this is close to the right syntax. I might need the word "BULK" in there, or something, I can't remember. Any idea?
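
    For what it's worth, the statement quoted above is essentially the row-constructor form that T-SQL supports from SQL Server 2008 onward; no BULK keyword is involved. A minimal sketch, assuming the table and columns from the question and SQL Server 2008 or later:

        INSERT INTO dbo.MyTable (ID, Name)
        VALUES
            (123, 'Timmy'),
            (124, 'Jonny'),
            (125, 'Sally');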

    Read the article

  • WPF: Once I set a property in code, it ignores XAML binding forever more... how do I prevent that?

    - by Timothy Khouri
    I have a button that has a datatrigger that is used to disable the button if a certain property is not set to true: <Button Name="ExtendButton" Click="ExtendButton_Click" Margin="0,0,0,8"> <Button.Style> <Style> <Style.Triggers> <DataTrigger Binding="{Binding IsConnected}" Value="False"> <Setter Property="Button.IsEnabled" Value="False" /> </DataTrigger> </Style.Triggers> </Style> </Button.Style> That's some very simple binding, and it works perfectly. I can set "IsConnected" true and false and true and false and true and false, and I love to see my button just auto-magically become disabled, then enabled, etc. etc. However, in my Button_Click event... I want to: Disable the button (by using ExtendButton.IsEnabled = false;) Run some asynchronous code (that hits a server... takes about 1 second). Re-enable the button (by using ExtendButton.IsEnabled = true;) The problem is, the very instant that I manually set IsEnabled to either true or false... my XAML binding will never fire again. This makes me very sad :( I wish that IsEnabled was tri-state... and that true meant true, false meant false and null meant inherit. But that is not the case, so what do I do?
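
    One way around this (a sketch, not necessarily the route the asker took): a plain ExtendButton.IsEnabled = false writes a local value, which outranks style triggers in WPF's dependency-property precedence, so the DataTrigger can never win again. SetCurrentValue and ClearValue avoid leaving that local value behind. DoServerWork is a hypothetical stand-in for the one-second server call:

        private void ExtendButton_Click(object sender, RoutedEventArgs e)
        {
            // Change the effective value without recording a local value,
            // so the style's DataTrigger still governs IsEnabled.
            ExtendButton.SetCurrentValue(UIElement.IsEnabledProperty, false);

            DoServerWork(); // hypothetical stand-in for the ~1 second server call

            // Remove any local value entirely; the trigger takes over again.
            ExtendButton.ClearValue(UIElement.IsEnabledProperty);
        }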

    Read the article

  • ClickOnce: How do I pass a querystring value to my app *through the installer*?

    - by Timothy Khouri
    My company currently builds separate MSIs for all of our clients, even though the app is 100% the same across the board (with a single exception, an ID in the app.config). I would like to show them that we can publish in one place with ClickOnce, and simply add a query string parameter for each client's installer. Example: http://mysite.com/setup.exe?ID=1234-56-7890 The issue that I'm having is that the above ("ID=1234...") is not being passed along to the "myapplication.application". What is happening instead is that the app is installed successfully, and it is running the first time with an activation context, but the "ActivationUri" does not contain any query string values. Is there a way to pass query string values FROM THE INSTALLER URL to the application's launch URL? If so, how?
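
    Not the installer-to-launch-URL pass-through itself, but for reference, a sketch of reading whatever does arrive on the activation URI, assuming the deployment is published with "Allow URL parameters to be passed to the application" enabled:

        using System.Deployment.Application;
        using System.Web; // for HttpUtility.ParseQueryString

        string clientId = null;
        if (ApplicationDeployment.IsNetworkDeployed &&
            ApplicationDeployment.CurrentDeployment.ActivationUri != null)
        {
            var query = HttpUtility.ParseQueryString(
                ApplicationDeployment.CurrentDeployment.ActivationUri.Query);
            clientId = query["ID"]; // null if the parameter never made it through
        }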

    Read the article

  • How can I speed up Subversion checkins? (Using ANKH, latest, Visual Studio 2010)

    - by Timothy Khouri
    I've started working on a new web project with some friends... we are using the latest Subversion server (installed last week), the latest version of ANKH. My web project is a whopping 1.5 megabytes (that's with all images, css files, dll's after compiling, pdb files... etc). Checking in even super small changes (literally adding the letter "x" to a few files for testing)... takes FOREVER! (about 10 seconds - I almost killed myself). The ANKH client is measuring in BYTES PER SECOND ... BYTES? per second... I must be doing something wrong. Does anyone know what config file has a joke totallyMessWithPeople=true so that I can turn that off or something? Oh, also, changing one "big" file of around 10k brings the speed up to nearly the speed of light (which is apparently 857 bytes per second). Help me, Obi-Wan Kenobi, you're my only hope! EDIT: As a note... my real work project that uses Visual Source Safe 2005 (I know, ouch) uploads files at about 200-500kbps from this very same computer/internet connection.

    Read the article

  • In WPF, how do I update the object that my custom property is bound to?

    - by Timothy Khouri
    I have a custom property that works perfectly, except when it's bound to an object. The reason is that once the following code is executed: base.SetValue(ValueProperty, value); ... then my control is no longer bound. I know this because calling: base.GetBindingExpression(ValueProperty); ... returns the binding object perfectly - UNTIL I call base.SetValue. So my question is, how do I pass the new "value" on to the object that I'm bound to?
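
    For context, a sketch of why this usually happens: with the default one-way binding, SetValue replaces the binding with a local value, whereas if the property is registered so that bindings to it default to two-way, SetValue pushes the new value to the bound object and the binding survives. MyCustomControl is a placeholder owner type, not from the post:

        public static readonly DependencyProperty ValueProperty =
            DependencyProperty.Register(
                "Value",
                typeof(object),
                typeof(MyCustomControl), // hypothetical owner class
                new FrameworkPropertyMetadata(
                    null,
                    FrameworkPropertyMetadataOptions.BindsTwoWayByDefault));

        public object Value
        {
            get { return GetValue(ValueProperty); }
            set { SetValue(ValueProperty, value); }
        }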

    Read the article

  • Sleep/Suspend and WOL on FreeNAS

    - by Timothy R. Butler
    I am trying to figure out how to get FreeNAS 8 to sleep when inactive and, ideally, wake on lan activity (or, less ideally, wake on a WOL magic packet). However, as I've tried to search for information on how to do this, almost all discussions seem to be centered on FreeNAS 7. Also, the tools included in FreeBSD to do this seem to be missing (i.e. acpiconf, etc.). Is there a way to get FreeNAS 8 to sleep and wake so that I don't have to leave the server running all the time? Given its usage level, it seems a waste to have the server running constantly.

    Read the article

  • Can't install new database in OpenLDAP 2.4 with BDB on Debian

    - by Timothy High
    I'm trying to install an openldap server (slapd) on a Debian EC2 instance. I have followed all the instructions I can find, and am using the recommended slapd-config approach to configuration. It all seems to be just fine, except that for some reason it can't create my new database. ldap.conf.bak (renamed to ensure it's not being used): ########## # Basics # ########## include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema pidfile /var/run/slapd/slapd.pid argsfile /var/run/slapd/slapd.args loglevel none modulepath /usr/lib/ldap # modulepath /usr/local/libexec/openldap moduleload back_bdb.la database config #rootdn "cn=admin,cn=config" rootpw secret database bdb suffix "dc=example,dc=com" rootdn "cn=manager,dc=example,dc=com" rootpw secret directory /usr/local/var/openldap-data ######## # ACLs # ######## access to attrs=userPassword by anonymous auth by self write by * none access to * by self write by * none When I run slaptest on it, it complains that it couldn't find the id2entry.bdb file: root@server:/etc/ldap# slaptest -f ldap.conf.bak -F slapd.d bdb_db_open: database "dc=example,dc=com": db_open(/usr/local/var/openldap-data/id2entry.bdb) failed: No such file or directory (2). backend_startup_one (type=bdb, suffix="dc=example,dc=com"): bi_db_open failed! (2) slap_startup failed (test would succeed using the -u switch) Using the -u switch it works, of course. But that merely creates the configuration. It doesn't resolve the underlying problem: root@server:/etc/ldap# slaptest -f ldap.conf.bak -F slapd.d -u config file testing succeeded Looking in the database directory, the basic files are there (with right ownership, after a manual chown), but the dbd file wasn't created: root@server:/etc/ldap# ls -al /usr/local/var/openldap-data total 4328 drwxr-sr-x 2 openldap openldap 4096 Mar 1 15:23 . drwxr-sr-x 4 root staff 4096 Mar 1 13:50 .. -rw-r--r-- 1 openldap openldap 3080 Mar 1 14:35 DB_CONFIG -rw------- 1 openldap openldap 24576 Mar 1 15:23 __db.001 -rw------- 1 openldap openldap 843776 Mar 1 15:23 __db.002 -rw------- 1 openldap openldap 2629632 Mar 1 15:23 __db.003 -rw------- 1 openldap openldap 655360 Mar 1 14:35 __db.004 -rw------- 1 openldap openldap 4431872 Mar 1 15:23 __db.005 -rw------- 1 openldap openldap 32768 Mar 1 15:23 __db.006 -rw-r--r-- 1 openldap openldap 2048 Mar 1 15:23 alock (note that, because I'm doing this as root, I had to also change ownership of some of the files created by slaptest) Finally, I can start the slapd service, but it dies in the attempt (text from syslog): Mar 1 15:06:23 server slapd[21160]: @(#) $OpenLDAP: slapd 2.4.23 (Jun 15 2011 13:31:57) $#012#011@incagijs:/home/thijs/debian/p-u/openldap-2.4.23/debian/build/servers/slapd Mar 1 15:06:23 server slapd[21160]: config error processing olcDatabase={1}bdb,cn=config: Mar 1 15:06:23 server slapd[21160]: slapd stopped. Mar 1 15:06:23 server slapd[21160]: connections_destroy: nothing to destroy. I manually checked the olcDatabase={1}bdb file, and it looks fine to my amateur eye. All my specific configs are there. Unfortunately, syslog isn't reporting a specific error in this case (if it were a file permission error, it would say). I've tried uninstalling and reinstalling slapd, changing permissions, Googling my wits out, but I'm tapped out. Any OpenLDAP genius out there would be greatly appreciated!

    Read the article

  • SQL Server 2008 Remote Access

    - by Timothy Strimple
    I'm having problems connecting to my SQL Server 2008 database from my computer. I have enabled remote connections as described in this answer (http://serverfault.com/questions/7798/how-to-enable-remote-connections-for-sql-server-2008). And I have added the ports listed on the microsoft support page to our Cisco Asa firewall and I'm still unable to connect. The error I'm getting from the SQL Management Studio is: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.) (Microsoft SQL Server, Error: 10060) Once again, I have double and triple checked that remote connections are enabled under the database properties and that TCP is enabled on the configuration page. I've added tcp ports 135, 1433, 1434, 2382, 2383, and 4022 as well as udp 1434 to the firewall. I've also checked to make sure that 1433 is the static port that is set in the tcp section of the database server configuration. The ports should be configured correctly in the firewall since http/https and rdp are all working and the sql server ports are setup the same way. What am I missing here? Any help you could offer would be greatly appreciated. Edit: I can connect to the server via TCP on the internal network. The servers are colocated in a datacenter and I can connect from my production box to my development box and vice versa. To me that indicates a firewall issue, but I've no idea what else to open. I've even tried allowing all tcp ports to that server without success.
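
    One quick way to separate a firewall drop from a SQL Server configuration problem (a sketch; 203.0.113.10 is a placeholder for the server's public address) is a raw TCP probe of port 1433 from the outside machine:

        using System;
        using System.Net.Sockets;

        // A timeout here means the packets are being dropped before they reach
        // SQL Server (firewall/NAT); a successful connect points at SQL itself.
        try
        {
            using (var probe = new TcpClient())
            {
                probe.Connect("203.0.113.10", 1433); // placeholder public IP
                Console.WriteLine("TCP 1433 reachable");
            }
        }
        catch (SocketException ex)
        {
            Console.WriteLine("TCP 1433 unreachable: " + ex.Message);
        }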

    Read the article

  • Refurbished System Windows License Key and OEM Media

    - by Timothy R. Butler
    According to this question, it is legal to use a Windows 7 OEM license that is presently installed as a 32-bit install with a 64-bit version of Windows 7. With that in mind, I purchased several refurbished systems through TigerDirect. When I received the computers today, I found that they have a Windows 7 license key attached to them that says it is a "refurbished key." A flyer in the box also seemed to imply that this key would not work with regular OEM media. Has anyone tried using regular OEM media with a refurbished key? I had hoped to create a new 64-bit WIM image that I could use on these systems, but I don't want to try replacing the default install with this new 64-bit install only to find that the key won't validate. If it requires a special customized image, is it possible to convert another type of Windows 7 disc into the required sort much as one can convert a retail disc to an OEM one (and vice versa)?

    Read the article

  • Long Gigabit Ethernet Run

    - by Timothy R. Butler
    I am trying to get a Gig-E network between two buildings that are approximately 260 ft. away. While some TRENDnet switches failed to be able to connect to each other over Cat 6 at that distance, two Netgear 5-port Gig-E switches do so just fine. However, it still fails after I put in place APC PNET1GB ethernet surge protectors at each end before the line connects to the respective switches. So I find myself wondering if I simply need to find a better surge protector that doesn't degrade the signal as much (if so, what kind would you recommend?) or if I should give up on copper and use fiber between the buildings. If I opt to go the latter route, I could really use some pointers. It looks like LC connectors are the most common, but I keep running into some others as well. A media converter on each end seems like the simplest solution, but perhaps a Gig-E switch with an SFP port would make more sense? Given a very limited budget, sticking with my existing copper seems best, but if it is bound to be a headache, a 100 meter fiber cable is something I think I can swing cost-wise.

    Read the article

  • Linux: Force fsck of a read-only mounted filesystem?

    - by Timothy Miller
    I'm developing for a headless embedded appliance, running CentOS 6.2. The user can connect a keyboard, but not a monitor, and a serial console would require opening the case, something we don't want the user to have to do. This all pretty much obviates the possibility of using a recovery USB drive to boot from, unless all it does is blindly reimage the harddrive. I would like to provide some recovery facilities, and I have written a tool that comes up on /dev/tty1 in place of getty to provide these functions. One such function is fsck. I have found out how to remount the root and other file systems read-only. Now that they are read-only, it should be safe to fsck them and then reboot. Unfortunately, fsck complains to me that the filesystems are mounted and refuses to do anything. How can I force fsck to run on a read-only mounted partition? Based on my research, this is going to have to be something obscure. "-f" just means to force repair of a clean (but unmounted) partition. I need to repair a clean or unclean mounted partition. From what I read, this is something "only experts" should do, but no one has bothered to explain how the experts do it. I'm hoping someone can reveal this to me. BTW, I've noticed that e2fsck 1.42.4 on Gentoo will let you fsck a mounted partition, even mounted read-write, but it seems only to do so if fsck is run from a terminal, so it can ask the user if they're sure they want to do something so dangerous. I'm not sure if the CentOS version does the same thing, but it appears that fsck CAN repair a mounted partition, but it flatly refuses to when not run from a terminal. One last-resort option is for me to compile my own hacked fsck. But I'm afraid I'll mess it up in some unexpected way. Thanks! Note: Originally posted here.

    Read the article

  • Intel Core i7-4960HQ vs. 4850HQ (Haswell) [on hold]

    - by Timothy R. Butler
    I'm looking at the new MacBook Pros and trying to decide between the Core i7-4960HQ (2.6 GHz) and i7-4850HQ (2.3 GHz). I've found some synthetic benchmarks comparing them, but I haven't found a lot of data, so I'd appreciate any pointers to good comparisons for the Haswell family (especially these two processors). My cursory analysis seems to suggest there isn't a huge gain from the extra 300 MHz. I'd like to determine not only whether this is generally true, but also to figure out if the gains that are made in performance come at too high a cost. Is the 2.6 going to be pushing the limits of what can fit in a thin laptop without overheating? I've looked at some of Intel's documentation, but have not been able to determine what the normal and maximum operating temperature differences are for the models. In the past, there have been times that Intel's fastest models in a given range ran especially hot and/or consumed significantly more power compared to slightly slower models. Do those concerns factor into the current generation?

    Read the article

  • Why is (free_space + used_space) != total_size in df? [migrated]

    - by Timothy Jones
    I have a ~2TB ext4 USB external disk which is about half full: $ df Filesystem 1K-blocks Used Available Use% Mounted on /dev/sdc 1922860848 927384456 897800668 51% /media/big I'm wondering why the total size (1922860848) isn't the same as Used+Available (1825185124)? From this answer I see that 5% of the disk might be reserved for root, but that would still only take the total used to 1921328166, which is still off. Is it related to some other filesystem overhead? In case it's relevant, lsof -n | grep deleted shows no deleted files on this disk, and there are no other filesystems mounted inside this one.
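
    For reference, the arithmetic from the df output above: 927384456 + 897800668 = 1825185124, leaving 1922860848 - 1825185124 = 97675724 1K-blocks (roughly 5.1% of the device) counted as neither Used nor Available. df leaves reserved blocks out of both columns, so if that whole gap is the root reserve, the reserve is simply a little over the nominal 5% rather than exactly 5%.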

    Read the article

  • Exchange Disconnecting on EHLO with remote telnet

    - by Timothy Baldridge
    When I go to the local terminal on my Exchange box (SBS 2008) I can do this: telnet 127.0.0.1 25 220 Exchange banner here EHLO example.com 250 Server name However when I go from another box, or from the actual IP of the server I get this: telnet 192.168.21.20 25 220 Exchange banner here EHLO example.com 421 4.4.1 Connection timed out Connection to host lost. The odd thing is, this server is currently in production and working fine (receiving mail for our entire domain). But my C# programs can't send mail to it (they get this same error). Any ideas?
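
    For what it's worth, a minimal C# repro of the failing path (a sketch; the addresses are placeholders, and it assumes the server accepts unauthenticated internal relay, which the post doesn't confirm):

        using System.Net.Mail;

        // Does what the in-house apps do: connect to the Exchange box on port 25
        // and send one message. Run locally against 127.0.0.1 it should succeed;
        // run against 192.168.21.20 it fails the same way the remote EHLO does.
        var client = new SmtpClient("192.168.21.20", 25);
        client.Send(new MailMessage("app@example.com", "user@example.com",
                                    "Test", "Connectivity test"));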

    Read the article

  • FreeNAS AFP Doesn't Authenticate

    - by Timothy R. Butler
    I just set up a FreeNAS 8.0.3 server and am trying to use its AFP (Netatalk) service to access it via a Mac OS X Lion system. I created the ZFS volume, set its permissions to include my user in its owner group (and set group write permissions), created an AFP share with AFP3 and told that share to "allow" @uninet (my group). I have a user on the server named tbutler, matching the user on my Mac. I can see the server, "Beatrice," in Finder. When I try to login in Finder using "Connect As...," the user "tbutler" and the proper password, I am returned to the main Finder window with the black bar now saying "Connection Failed." Here's the most recent data from /var/messages on the server, which shows me trying to login both as a "Registered User" and a "Guest": Jul 30 00:29:07 freenas afpd[8972]: AFP3.3 Login by nobody Jul 30 00:29:08 freenas afpd[8972]: AFP logout by nobody Jul 30 00:29:08 freenas afpd[8972]: dsi_stream_read: len:0, unexpected EOF Jul 30 00:29:08 freenas afpd[8972]: afp_over_dsi: client logged out, terminating DSI session Jul 30 00:29:08 freenas afpd[8972]: AFP statistics: 0.14 KB read, 0.12 KB written Jul 30 00:29:14 freenas afpd[8975]: AFP3.3 Login by tbutler Jul 30 00:29:14 freenas afpd[8975]: AFP logout by tbutler Jul 30 00:29:14 freenas afpd[8975]: dsi_stream_read: len:0, unexpected EOF Jul 30 00:29:14 freenas afpd[8975]: afp_over_dsi: client logged out, terminating DSI session Jul 30 00:29:14 freenas afpd[8975]: AFP statistics: 0.62 KB read, 0.48 KB written Jul 30 00:29:20 freenas afpd[8978]: AFP3.3 Login by tbutler Jul 30 00:29:20 freenas afpd[8978]: AFP logout by tbutler Jul 30 00:29:20 freenas afpd[8978]: dsi_stream_read: len:0, unexpected EOF Jul 30 00:29:20 freenas afpd[8978]: afp_over_dsi: client logged out, terminating DSI session Jul 30 00:29:20 freenas afpd[8978]: AFP statistics: 0.62 KB read, 0.48 KB written Jul 30 00:29:27 freenas afpd[8983]: AFP3.3 Login by nobody (My clock is clearly not properly set, but be that as it may...) Any suggestions? UPDATE: Apparently this problem occurs if one gives the AFP share a password in the AFP share settings box. When I removed the password and tried to login using a user account again, it worked just fine.

    Read the article

  • Resizing Partitions on Live RHEL/cPanel Server

    - by Timothy R. Butler
    I've resized many partitions over the years on Linux, Windows and Mac OS X -- but always using a GUI. However, the time has come where the preset partition sizes my data center placed on my server aren't the right sizes and I need to resize a production server's disks. I could fiddle with it and probably do OK, but given that it is a production server, I wanted to get some advice about the right way to do this. I do have KVM over IP access, so if it is best to take the server offline and boot off a rescue partition, I can do that. root [/var/lib/mysql]# df -h Filesystem Size Used Avail Use% Mounted on /dev/sda2 9.9G 2.1G 7.3G 23% / tmpfs 7.8G 0 7.8G 0% /dev/shm /dev/sda1 99M 77M 18M 82% /boot /dev/sda8 884G 463G 376G 56% /home /dev/sda3 9.9G 8.0G 1.5G 85% /usr /dev/sda5 9.9G 9.1G 308M 97% /var /usr/tmpDSK 2.0G 38M 1.8G 3% /tmp As you can see /var and /usr are quite close to being full and I've actually had to symlink some logs on /usr to directories in /home to balance things out. What I would like to do is to add 6-10 GB each to /usr and /var, presumably taking the space from /home. As I think about how the disk is arranged, the best thought I've come up with is to reduce /home by 16 GB, say, and move /var to the spot freed up, then allocate /var's space to /usr. However, that would put /var at the far end of the disk, which seems less than ideal, given that MySQL has all of its data on that partition. I'd love to take the space out of the closer end of /usr, but I assume that would take a very arduous (and perhaps risky) process of moving all of the data in /usr around. I seem to recall having such a process fail for me on a computer in the past. The other option might be to merge / and /usr since / is underutilized, though I'm not sure if that's a good idea. Do you have any suggestions both on the best reallocation plan and the commands to use to accomplish it? UPDATE: I should add -- here's the partition table. There's one unused partition, which, if memory serves, was the original tmp location before I created a tmp image: Name Flags Part Type FS Type [Label] Size (MB) ------------------------------------------------------------------------------ Unusable 1.05* sda1 Boot Primary Linux ext2 106.96* sda2 Primary Linux ext3 10737.42* sda3 Primary Linux ext3 10737.42* sda5 NC Logical Linux ext3 10738.47* sda6 NC Logical Linux swap / Solaris 2148.54* sda7 NC Logical Linux ext3 1074.80* sda8 NC Logical Linux ext3 964098.53*

    Read the article

  • nginx projects in subfolders

    - by Timothy
    I'm getting frustrated with my nginx configuration and so I'm asking for help in writing my config file to serve multiple projects from sub-directories in the same root. This isn't virtual hosting as they are all using the same host value. Perhaps an example will clarify my attempt: request 192.168.1.1/ should serve index.php from /var/www/public/ request 192.168.1.1/wiki/ should serve index.php from /var/www/wiki/public/ request 192.168.1.1/blog/ should serve index.php from /var/www/blog/public/ These projects are using PHP and use fastcgi. My current configuration is very minimal. server { listen 80 default; server_name localhost; access_log /var/log/nginx/localhost.access.log; root /var/www; index index.php index.html; location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name; include fastcgi_params; } } I've tried various things with alias and rewrite but was not able to get things set correctly for fastcgi. It seems there should be a more elegant way than writing location blocks and duplicating root, index, SCRIPT_FILENAME, etc. Any pointers to get me headed in the right direction are appreciated.

    Read the article

  • How to Install OS without DVD and USB boot

    - by Timothy James Reed
    I just purchased a used Dell F1D 1U rack mount server and would like to install Ubuntu or ESXi with Virtual Disks or anything for that matter. I've read that Dells have built-in DRAC so you can access them remotely. There are 3 Ethernet plugs in the back but I don't know which one to use. In the BIOS it says I can configure Remote Access on [com1] or [com2]; I'm not sure if that is Ethernet 1 & 2. I also set it up to use a static IP address. That's as far as I have gone. Not sure what to do next. I've tried to set up a PXE server with TFTP but get stuck at an error like "can't locate file". Not even sure I want to go that route anymore because of all the hassle of editing files. All my computers are OS X or Linux and the only Windows I have is via VMware. What steps do I take now?

    Read the article

  • Stop windows-7 explorer taskbar icon changing for different locations

    - by Timothy Kane
    If I open an instance of Windows Explorer, a button appears on the taskbar; its icon matches the icon of the current location in Explorer. E.g. if I've clicked on a drive (e.g. C:) then the taskbar icon changes to the drive icon. If I select the Pictures Library location, the taskbar icon changes to a little picture symbol. How can I stop this behaviour, so that the taskbar icon always shows the same icon, regardless of where I've clicked in Explorer?

    Read the article

  • Install Windows 8 Apps to custom directory

    - by Timothy Ford
    I'm running Windows 8 Pro on a laptop. I initially installed the OS just to fiddle around with it and get to know something I'd inevitably end up having to fix for someone. The problem is that I only gave the partition about 32 GB and that ran out pretty quickly. I'd like to install Office and get used to that as well, but unfortunately I'm a few gigabytes short. I'd like to know if I can change the install location of Office 2013 and other apps so that I can run them from another partition, because there is no space left to the right of this partition and I can't extend it.

    Read the article

  • Very Hot P4 3.4 GHz

    - by Timothy R. Butler
    My Pentium 4 3.4 GHz is running hot even with a new fan on my case's heatsink (80mm rated at ~ 40 CFM). If I stress the CPU, it goes over Intel's maximum temperature rating. I'm contemplating putting in a "PCI cooler" (i.e. a fan that fits into an empty PCI slot) to see if I could get better airflow in the case. The slot isn't far from the processor and heatpipes that lead from the processor to the fan, so I'm hopeful this would help, but not entirely certain. Thoughts?

    Read the article

  • Dropbox.py on RHEL 6

    - by Timothy R. Butler
    I'm trying to run a headless install of Dropbox on RHEL 6. The daemon seems to be running, but when I try to use Dropbox's associated dropbox.py tool to control the daemon, it fails to run with the following error: Traceback (most recent call last): File "./dropbox.py", line 26, in <module> import locale File "/usr/lib64/python2.6/locale.py", line 202, in <module> import re, operator ImportError: /home/dropbox/.dropbox-dist/operator.so: undefined symbol: _PyUnicodeUCS2_AsDefaultEncodedString I'm running the current RHEL build of Python 2.6: root@cedar [/home/dropbox/.dropbox-dist]# rpm -qv python python-2.6.6-29.el6_3.3.x86_64 (I'm not sure if this would be better suited to Stack Overflow since it is on the verge of being a programming issue, but since I'm trying to use a program straight from Dropbox, I placed it here.)

    Read the article
