Search Results

Search found 12666 results on 507 pages for 'knowledge base'.


  • Ruby Net::LDAP returns "code = 53 message = Unwilling to perform" error

    - by Yong
    Hi, I am getting the error "code = 53, message = Unwilling to perform" while traversing the eDirectory tree with treebase = "ou=Users,o=MTC". My Ruby script reads about 126 entries from eDirectory, then stops and prints this error. I have no clue why this is happening. I am using the Ruby net-ldap library version 0.0.4. The following is an excerpt of the code:

        require 'rubygems'
        require 'net/ldap'

        ldap = Net::LDAP.new :host => "10.121.121.112",
                             :port => 389,
                             :auth => { :method   => :simple,
                                        :username => "cn=abc,ou=Users,o=MTC",
                                        :password => "123" }

        filter   = Net::LDAP::Filter.eq("mail", "*mtc.ca.gov")
        treebase = "ou=Users,o=MTC"
        attrs    = ["mail", "uid", "cn", "ou", "fullname"]

        i = 0
        ldap.search(:base => treebase, :attributes => attrs, :filter => filter) do |entry|
          puts "DN: #{entry.dn}"
          i += 1
          entry.each do |attribute, values|
            puts "  #{attribute}:"
            values.each do |value|
              puts "  --->#{value}"
            end
          end
        end
        puts "Total #{i} entries found."
        p ldap.get_operation_result

    Here is the output, with the error at the end. Thank you very much for your help.

        DN: cn=uvogle,ou=Users,o=MTC
          mail:
          --->[email protected]
          fullname:
          --->Ursula Vogler
          ou:
          --->Legislation and Public Affairs
          dn:
          --->cn=uvogle,ou=Users,o=MTC
          cn:
          --->uvogle
        Total 126 entries found.
        OpenStruct code=53, message="Unwilling to perform"
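    A quick way to tell whether the 126-entry cutoff is imposed by the server rather than by the Ruby library is to repeat the same search with OpenLDAP's command-line client. This is a sketch only; the host, bind DN, and filter are copied from the script above, and eDirectory may report a size or lookup limit differently:

        ldapsearch -x -H ldap://10.121.121.112:389 \
          -D "cn=abc,ou=Users,o=MTC" -w 123 \
          -b "ou=Users,o=MTC" "(mail=*mtc.ca.gov)" mail uid cn ou fullname

    If this also stops after 126 entries, the limit is a server-side restriction (eDirectory administrators can raise per-user or global search limits), and no client-side change will get past it without paged or limited queries.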


  • How to determine the Kerberos realm from an LDAP directory?

    - by tstm
    I have two Kerberos realms I can authenticate against. One of them I control; the other is external from my point of view. I also have an internal user database in LDAP. Let's say the realms are INTERNAL.COM and EXTERNAL.COM. In LDAP I have user entries like this:

        uid=testuser,ou=People,dc=tml,dc=hut,dc=fi
        shadowFlag: 0
        shadowMin: -1
        loginShell: /bin/bash
        shadowInactive: -1
        displayName: User Test
        objectClass: top
        objectClass: account
        objectClass: posixAccount
        objectClass: shadowAccount
        objectClass: person
        objectClass: organizationalPerson
        objectClass: inetOrgPerson
        uidNumber: 1059
        shadowWarning: 14
        uid: testuser
        shadowMax: 99999
        gidNumber: 1024
        gecos: User Test
        sn: Test
        homeDirectory: /home/testuser
        mail: [email protected]
        givenName: User
        shadowLastChange: 15504
        shadowExpire: 15522
        cn: User.Test
        userPassword: {SASL}[email protected]

    What I would like to do, somehow, is to specify on a per-user basis which authentication server / realm the user is authenticated against. Configuring Kerberos to handle multiple realms is easy. But how do I configure other components, like PAM, to handle the fact that some users are from INTERNAL.COM and some from EXTERNAL.COM? There needs to be an LDAP lookup of some kind that fetches the realm and the authentication name, followed by the actual authentication itself. Is there a standardized way to add this information to LDAP, or to look it up? Are there other workarounds for a multi-realm user base? I might be OK with a single-realm solution too, as long as I can specify the username-realm combination for each user separately.
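    The userPassword value above already carries a realm in its {SASL} mapping, which points at one standardized option: store each user's Kerberos principal in the directory and have the login stack look it up. A minimal LDIF sketch, assuming the MIT Kerberos LDAP schema (which defines krbPrincipalAux / krbPrincipalName) is loaded on the server:

        dn: uid=testuser,ou=People,dc=tml,dc=hut,dc=fi
        changetype: modify
        add: objectClass
        objectClass: krbPrincipalAux
        -
        add: krbPrincipalName
        krbPrincipalName: testuser@EXTERNAL.COM

    A PAM stack could then be pointed at the right realm by looking this attribute up; stock pam_krb5 does not read it by itself, so treat this as a direction rather than a drop-in recipe.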


  • Wireless card on HP laptop not working

    - by D. Strout
    I just bought an HP Envy m6-1125dx online from Best Buy. When I got it home and started it up, the wireless card did not work well - at all. I could connect, but any real usage would cause the connection to start dropping every 30 seconds or so, and it would be really slow. Taking another look at the reviews on the Best Buy site, it seems only a few others had this problem, so I took it to my local Best Buy and exchanged it for another unit. Got it home again and the card had the same issues. Which leads to my dilemma.

    First: does this model come with several different cards? Mine is a Ralink RT5390R (on both units I received). If it does, then I can keep exchanging until I get a unit with a different card. I wouldn't ask this, except it seems weird that only a few people mentioned this issue, so I thought that might be one possibility.

    I looked into replacing the card with a different one myself, but it seems that HP blocks certain wireless cards. However, some people reported success in replacing the card, and this site said it was only an issue on "older HP computer[s]". Can anyone confirm this?

    Finally, if that fails or will not work, does anyone know what I can get through Best Buy? I am concerned that they will not put in any card other than the Ralink, and after two of those, I don't want that. Can I ask Best Buy support to use a different card? Can they even get another card from HP?

    I guess the base question is: should I attempt to replace the card myself (two days via Amazon to get a new card), should I try to get the laptop repaired through Best Buy (two to four weeks), should I go for a different model laptop from Best Buy, or should I try a different unit of the same model (third time's the charm?).


  • Wireless does not work on Ubuntu 9.04

    - by Yongwei Xing
    Hi all, I installed Ubuntu 9.04 on my old Lenovo Y520 laptop and the wireless does not work. My wireless card is an Intel PRO/Wireless 2100, but I cannot enable it. My wired card is working well. Has anyone run into this before? The ifconfig output is:

        eth0  Link encap:Ethernet  HWaddr 00:0a:e4:5f:6c:30
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:973 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1025 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:574701 (574.7 KB)  TX bytes:169249 (169.2 KB)
              Interrupt:10

        eth1  Link encap:Ethernet  HWaddr 00:0c:f1:58:79:b5
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
              Interrupt:10 Base address:0x8000 Memory:d0202000-d0202fff

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:8 errors:0 dropped:0 overruns:0 frame:0
              TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:480 (480.0 B)  TX bytes:480 (480.0 B)

    The output of iwconfig is:

        eth1  unassociated  ESSID:off/any  Nickname:"ipw2100"
              Mode:Managed  Channel=0  Access Point: Not-Associated
              Bit Rate:0 kb/s  Tx-Power:off  Retry short limit:7
              RTS thr:off  Fragment thr:off
              Power Management:off
              Link Quality:0  Signal level:0  Noise level:0
              Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
              Tx excessive retries:0  Invalid misc:0  Missed beacon:0

    I have another question. When my OS was 9.04 there was a network connection icon on the panel at the top; after I upgraded to 9.10, that icon disappeared. How can I get it back? Best Regards,
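    The iwconfig output above shows Tx-Power:off, which usually means the radio itself is disabled (a hardware kill switch, or the driver brought the interface up with the transmitter off). A few hedged checks before digging deeper; the device name eth1 is taken from the output above:

        dmesg | grep -i ipw2100          # look for firmware-load or "radio kill switch" messages
        sudo iwconfig eth1 txpower on    # try turning the transmitter back on
        sudo iwlist eth1 scan            # see whether the card can see any networks now

    The ipw2100 driver also needs Intel's firmware files to be present; if dmesg reports a firmware error, installing the ipw2100 firmware package for your release should fix it.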


  • Setting cfengine3 class based on command output

    - by gnomie
    This question is very similar to "How can I use the output of a command in cfengine3", but I believe the answer there does not apply in my case. I want to update a git repository via "git pull" and, based on whether that led to changes, trigger some follow-up action. Simplified: if there were something like "match output and set class" via some body if_output_matches, I would want to use something like this:

        bundle agent updateRepo
        {
          commands:
            "/usr/bin/git pull"
              contain => setuidgiddir_sh("$(globals.user)","$(globals.group)","$(target)"),
              classes => if_output_matches("Already up-to-date.","no_update");

          reports:
            no_update::
              "nothing updated";
        }

        body contain setuidgiddir_sh(owner,group,folder)
        {
          exec_owner => "$(owner)";
          exec_group => "$(group)";
          useshell   => "true";
          chdir      => "$(folder)";
        }

    So, is it possible to use the output of a - possibly expensive - command and base some decision on it? The execresult function is no good choice for me because a) the pull may become expensive at times (using execresult is not recommended then, per the cfengine3 reference) and b) it does not allow specifying user, group, and working dir - which matters in my case, since the repository is in user space and not owned by root.
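    One mechanism that does exist in CFEngine 3 for exactly this is the module protocol: a commands promise with module => "true" runs a script whose output lines beginning with "+" define classes. A sketch under those rules - the wrapper path and class names are made up for illustration, and modules conventionally live under $(sys.workdir)/modules:

        #!/bin/sh
        # $(sys.workdir)/modules/gitpull - wrapper that turns pull output into a class
        out=$(/usr/bin/git pull 2>&1)
        case "$out" in
          *"Already up-to-date."*) echo "+no_update" ;;
          *)                       echo "+repo_updated" ;;
        esac

        # in the policy:
        bundle agent updateRepo
        {
          commands:
            "$(sys.workdir)/modules/gitpull"
              contain => setuidgiddir_sh("$(globals.user)","$(globals.group)","$(target)"),
              module  => "true";

          reports:
            no_update::    "nothing updated";
            repo_updated:: "repository changed, running follow-up";
        }

    Since the wrapper is an ordinary script, it keeps your contain body for user, group, and working directory, and the expensive pull runs exactly once.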


  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean URLs and it's mostly working, except for some weird redirects. If you have two sites where one is a substring of the other, then you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching, so with a config file like this:

        RewriteCond %{REQUEST_URI} ^/drupal
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    /drupaltest will match the /drupal line, and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string anchor ($) at the end of each rewrite condition, then it will always match the correct site and the links will always be rewritten correctly. That breaks down as soon as a user logs in, though, because requests are then made for paths below the site root, which the anchored bare prefix no longer matches. You can also fix the problem by ordering the sites in the config file so that the smallest substring always comes last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested, so that we could guarantee the order. The system administrator thought that was kludgy and didn't address the root problem. Disabling clean URLs should also fix the problem, but the users really want them, so I'd prefer to keep them if possible. I think we could also fix it by using an .htaccess file in each site to handle the clean URL rewriting, but that also seems suboptimal, since it would generate a higher load on the server and the server is intended to host the majority of the university's external-facing web content. Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something wrong to begin with?
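    One mod_rewrite idiom that avoids both the substring collision and the ordering games is to anchor on a path-segment boundary instead of end-of-string: require the prefix to be followed by either a slash or the end of the URI. A sketch of the idea (same paths as above; untested against your full config):

        RewriteCond %{REQUEST_URI} ^/drupaltest(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupal(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

    With (/|$), /drupaltest/node/1 can never match the /drupal block, so the order of the site stanzas stops mattering. (Query strings are not the issue here: %{REQUEST_URI} never includes them.)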


  • 10 GigE interfaces limit single-connection throughput to 1 Gb on a ProCurve 4208vl

    - by wazoox
    The setup is as follows: three Linux servers with Intel CX4 10 GigE controllers and an Xserve with a Myricom 10 GigE CX4 controller are connected to a ProCurve 4208vl switch, with a myriad of other machines connected through good ol' 1000BASE-T. The interfaces are actually set up as 10 Gig, according to both the switch monitoring interface and the servers (ethtool, etc.). However, a single connection between two 10 GigE equipped machines through the switch is limited to exactly 1 Gb. If I connect two of the 10 GigE machines directly with a CX4 cable, netperf reports the link bandwidth as 9000 Mb/s and NFS achieves about 550 MB/s transfers. But when I'm using the switch, the connection tops out at 950 Mb/s through netperf and 110 MB/s with NFS. When I open several connections from three of the machines to the fourth, I get 350 MB/s of NFS transfer speed. So each individual 10 GigE port actually can reach much more than 1 Gb, but individual connections are strictly limited to 1 Gb.

    Conclusion: the 10 GigE connection through the switch behaves exactly like a trunk of ten 1 Gb connections. That doesn't make any sense to me, unless HP planned these ports only for cascading switches or strictly for many-clients-to-single-server connections. Unfortunately this is NOT the envisioned setup; we need big throughput from machine to machine. Is this a not-so-known (or carefully hidden...) limitation of this type of switch? Should I suggest seppuku to the HP representative? Does anyone have any idea how to enable proper behaviour? I upgraded at a hefty price from bonded 1 Gb links to 10 GigE and see exactly ZERO gain! That's absolutely unacceptable.


  • Mysterious Windows 7 slowdown problem

    - by cletus
    I have a fairly beefy machine:

        Intel Q9450
        8 GB DDR2-800 (4x2)
        Intel X25-M G2 80GB SSD
        Several other hard drives
        Windows 7 Ultimate 64

    In the last month I've gotten a mysterious slowdown problem. When I start my IDE (IntelliJ IDEA) it usually takes about 20 seconds on the SSD. If my machine has been on for a day or two (as far as I can tell this is the only pattern) and I try to start the IDE, it brings my machine to a halt. CPU usage goes up to 25% per core (so it's basically 100% usage) and it takes up to 5 minutes to start. Other things I've noticed: iTunes will start to skip and stutter (my music is running off a second hard drive). The only persistent things I'm running are:

        AVG Anti-Virus
        Spybot (the slowdown predates this)
        Hamachi and Murmur (again the slowdown predates this)
        Apple Airport Base Agent
        HP OfficeJet 8500 driver/manager

    The browser I use is Chrome. I can't think why that'd be relevant but it's always on, so I thought I'd mention it. When this happens I can't see a reason for it in the process list. No CPU hogs. No spikes in IO activity that I can see. Basically I'm at a loss to explain it and need to reboot, at which point everything returns to normal (for awhile). FWIW the Intel SSD is about 75-80% full. I know being too full can really degrade performance; I don't believe that's the issue here. Does anyone have any ideas on what I can do to fix this, or at least help find what's going wrong? This same machine (sans SSD) could run Win XP and stay up fine for a month or two.


  • How does VirtualBox's memory usage work?

    - by DrFredEdison
    I've been running several VMs with VirtualBox, and the memory usage reported from various perspectives doesn't add up - I'm having trouble figuring out how much memory my VMs actually use. Here is an example: I have a VM running Windows 7 (as the guest OS) on my Windows XP host machine.

        The host machine has 3 GB of RAM.
        The guest VM is set up with a base memory of 1 GB.
        If I run Task Manager on the guest OS, I see memory usage of 430 MB.
        If I run Task Manager on the host OS, I see 3 processes that seem to belong to VirtualBox:
            VirtualBox.exe (1), using 60 MB of memory (this one seems to have the most CPU usage)
            VirtualBox.exe (2), using 20 MB of memory
            VBoxSvc.exe, using 11.5 MB of memory
        While running the VM, the host OS's memory usage is about 2 GB.
        When I shut down the VM, the host OS's memory usage goes back down to about 900 MB.

    So clearly there are some huge differences here. I really don't understand how the guest OS can use 400+ MB while the host OS only shows about 75 MB allocated to the VM. Are there other processes used by VirtualBox that aren't as obviously named? Also, I'd like to know: if I run a machine with 1 GB, is that going to take 1 GB away from my host OS, or only the amount of memory the guest machine is currently using?

    Update: someone expressed distrust of my memory usage numbers, and I'm not sure if that distrust was directed at me or at my host OS's Task Manager's reporting (which is perhaps the culprit), but for any skeptics, here is a screenshot of those processes on the host machine:


  • How to set up hosts file for local environment?

    - by n00b0101
    I'm trying to create subdomains on my localhost and am way out of my territory... I'm running MAMP on my Mac OS X and I thought/think I had/have to do the following (assuming I want to create me.localhost.com and you.localhost.com):

    (1) Edit /private/etc/hosts. Right now, it looks like this:

        127.0.0.1        localhost
        255.255.255.255  broadcasthost
        ::1              localhost
        fe80::1%lo0      localhost

    So, do I just make it:

        127.0.0.1        localhost
        127.0.0.1        me.localhost.com
        127.0.0.1        you.localhost.com
        255.255.255.255  broadcasthost
        ::1              localhost
        fe80::1%lo0      localhost

    (2) I'm assuming I don't need to mess with DNS at all because it's local? So the hosts file should suffice?

    (3) And then, I need to edit my httpd.conf file to include virtual hosts? I tried this, but it's not picking it up...

        NameVirtualHost *
        <VirtualHost *>
            DocumentRoot "/Applications/MAMP/htdocs"
            ServerName localhost
        </VirtualHost>
        <VirtualHost *>
            DocumentRoot "/Applications/MAMP/htdocs/me.localhost.com"
            ServerName me.localhost.com
        </VirtualHost>
        <VirtualHost *>
            DocumentRoot "/Applications/MAMP/htdocs/you.localhost.com"
            ServerName you.localhost.com
        </VirtualHost>

    Not sure if I'm way off-base here... Help is greatly appreciated!
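    The hosts entries look right; the usual catch with MAMP is the port. MAMP's Apache listens on 8888 by default, so the NameVirtualHost and <VirtualHost> directives have to name the same address:port the server actually listens on. A sketch assuming the default MAMP port of 8888 (use *:80 if you've switched MAMP to port 80):

        NameVirtualHost *:8888
        <VirtualHost *:8888>
            DocumentRoot "/Applications/MAMP/htdocs"
            ServerName localhost
        </VirtualHost>
        <VirtualHost *:8888>
            DocumentRoot "/Applications/MAMP/htdocs/me.localhost.com"
            ServerName me.localhost.com
        </VirtualHost>

    You'd then browse to http://me.localhost.com:8888/. Also check that the vhost section of MAMP's httpd.conf isn't commented out or sitting in a separate file that is never Included.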


  • installing wxGTK-devel on CentOS 5.4

    - by jackhab
    I'm trying to install wxGTK-devel on CentOS, and since it's not in the base repo I added RPMForge. But now I'm getting these broken dependencies. I don't want to start tampering with individual RPMs because I suspect it will make things worse. I remember installing this package from RPMForge without a problem several months ago. Please advise.

        ...
        wxGTK-2.8.10-1.el4.rf.x86_64 from rpmforge has depsolving problems
          -- Missing Dependency: libgstreamer-0.8.so.1()(64bit) is needed by package wxGTK-2.8.10-1.el4.rf.x86_64 (rpmforge)
        wxGTK-2.8.10-1.el4.rf.x86_64 from rpmforge has depsolving problems
          -- Missing Dependency: libgstgconf-0.8.so.0()(64bit) is needed by package wxGTK-2.8.10-1.el4.rf.x86_64 (rpmforge)
        wxGTK-2.8.10-1.el4.rf.x86_64 from rpmforge has depsolving problems
          -- Missing Dependency: libgstinterfaces-0.8.so.0()(64bit) is needed by package wxGTK-2.8.10-1.el4.rf.x86_64 (rpmforge)
        Error: Missing Dependency: libgstreamer-0.8.so.1()(64bit) is needed by package wxGTK-2.8.10-1.el4.rf.x86_64 (rpmforge)
        Error: Missing Dependency: libgstinterfaces-0.8.so.0()(64bit) is needed by package wxGTK-2.8.10-1.el4.rf.x86_64 (rpmforge)
        Error: Missing Dependency: libgstgconf-0.8.so.0()(64bit) is needed by package wxGTK-2.8.10-1.el4.rf.x86_64 (rpmforge)
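    Note that the failing package is wxGTK-2.8.10-1.el4.rf - an EL4 build - on a CentOS 5.4 box, which suggests the RPMForge repo file is pointing at the el4 tree. A hedged check and fix (paths are the RPMForge defaults; adjust if your repo file is named differently):

        grep baseurl /etc/yum.repos.d/rpmforge.repo
        # if the URL contains /el4/, edit it to the matching /el5/ tree, then:
        yum clean all
        yum install wxGTK-devel

    The el5 build of wxGTK is linked against the GStreamer libraries CentOS 5 actually ships, so the missing libgstreamer-0.8 dependencies should disappear with it.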


  • Asus K50I sound issues

    - by MrStatic
    I have an Asus K50IJ (Best Buy) laptop and have issues with my sound. The speakers themselves work fine, but when I plug into the headphone jack it auto-mutes the front channel and no sound comes out of either the speakers or the headphones. If I then unmute the channel, I get sound from both the speakers and the headphones. alsamixer shows the Headphone channel as all grayed out. In /etc/modprobe.d/alsa-base.conf I have tried:

        snd-hda-intel model="asus-laptop"

    and

        snd-hda-intel model="asus"

    In Sound Preferences I have gone to Output and changed the connector to 'Analog Headphones'; that results in no sound from either speakers or headphones. As one forum suggested, I tried to comment out "blacklist snd_pcsp" in blacklist.conf, which resulted in no change. lspci -v shows:

        00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)
            Subsystem: Santa Cruz Operation Device 1043
            Flags: bus master, fast devsel, latency 0, IRQ 45
            Memory at fe9f4000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: [50] Power Management version 2
            Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
            Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
            Capabilities: [100] Virtual Channel
            Capabilities: [130] Root Complex Link
            Kernel driver in use: HDA Intel
            Kernel modules: snd-hda-intel


  • apt-mirror does not mirror the i18n directory

    - by Fred
    I need to set up a local Ubuntu mirror so the whole network doesn't need to hit remote servers in order to update and install new packages. Following a brief tutorial found here, I managed to get a server up and running that correctly mirrors packages from the main and restricted categories. However, when I call apt-get update on a client, I get a couple of errors such as:

        Ign http://192.168.1.18 karmic/main Translation-fr
        Ign http://192.168.1.18 karmic/restricted Translation-fr

    Checking back on the server, I see that apt-mirror only took the binary-amd64 directory of the mirror, and didn't take i18n, which would provide Translation-fr. The manpage for apt-mirror doesn't say anything about i18n, and Google is of no help either. How do I properly mirror i18n? My current mirror.list file is as follows:

        ############# config ##################
        #
        # set base_path    /var/spool/apt-mirror
        #
        # if you change the base path you must create the directories below with write privileges
        #
        # set mirror_path  $base_path/mirror
        # set skel_path    $base_path/skel
        # set var_path     $base_path/var
        # set cleanscript  $var_path/clean.sh
        # set defaultarch  <running host architecture>
        # set postmirror_script $var_path/postmirror.sh
        set run_postmirror 0
        set nthreads     20
        set _tilde 0
        #
        ############# end config ##############

        deb http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive karmic main restricted
        deb http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive karmic-updates main restricted

        clean http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive
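    If your apt-mirror version has no i18n support, one workaround is to fetch the Translation files yourself from the hook apt-mirror already provides: set run_postmirror 1 and have postmirror.sh pull the i18n directories into the same tree apt-mirror serves. A sketch only, with paths derived from the mirror.list above (verify the --cut-dirs count against your mirror's URL layout):

        #!/bin/sh
        # /var/spool/apt-mirror/var/postmirror.sh
        base=http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive
        dest=/var/spool/apt-mirror/mirror/mirror.cc.columbia.edu/pub/linux/ubuntu/archive
        for dist in karmic karmic-updates; do
          for comp in main restricted; do
            wget -q -r -np -nH -N --cut-dirs=4 -R "index.html*" \
                 -P "$dest" "$base/dists/$dist/$comp/i18n/"
          done
        done

    Note that the "Ign" lines are harmless by themselves - apt merely notes that the translations are absent - so this is cosmetic unless you actually want localized package descriptions on the clients.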


  • Expiring Windows Server users based on first login instead of defining a fixed expiration date

    - by smhnaji
    We want to give some Windows Server users remote access to our server so they can download from a special folder on it. The licenses we give to users are time-based: there should be 1-month, 2-month, ..., 1-year, ... licenses.

    CURRENT SITUATION (WHAT I DON'T WANT): when users are created and added to the OS, a fixed expiration date is given.

    WHAT I WANT: each user's expiration date should be calculated automatically from his first login, because the user might not need his account right when he purchases the license. In other words: today, when a one-month license is purchased on Jan 1st, it runs until Feb 1st no matter whether the user ever logs in; if he comes on Feb 5th and tries to begin using it, it has already expired. What I want is that when he comes on Feb 5th and begins using it, the license runs until March 5th.

    CLARIFICATION (update after MDMarra's comment): the working environment is Windows Server 2012, and by the word 'user' I mean native Windows Server users. Whenever a new person purchases a license with me, I create the account manually using the net user command, like this:

        net user ali pass /add /expires:2013-12-25
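    There is no built-in "expire N days after first logon" switch, but since the accounts are already managed with net user, a first-logon hook can stamp the date itself. A sketch only: the marker folder, the 30-day term, and the assumption that this runs with account-management rights (for example a scheduled task triggered at logon running as SYSTEM, not a plain user logon script) are all mine; the yyyy-MM-dd date format matches the /expires form already used above, but net user's date parsing is locale-dependent, so verify it on your system.

        @echo off
        rem set-expiry.cmd - runs once per user, at first logon
        set marker=C:\Licenses\%USERNAME%.first-login
        if exist "%marker%" goto :eof
        echo %DATE% > "%marker%"
        rem compute today + 30 days and push it into the account's expiry
        for /f "usebackq" %%d in (`powershell -NoProfile -Command "(Get-Date).AddDays(30).ToString('yyyy-MM-dd')"`) do net user %USERNAME% /expires:%%d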


  • Should I partition my main table with 2 million rows?

    - by domribaut
    Hi, I am a developer and need some DBA advice. We are starting to have performance problems with a MSSQL 2005 database. The visible effect of the incidents is mainly CPU hogging on the server, but operations reported that it was also draining resources from the SAN (not always). The main source of the issues is surely in some application, but I am wondering if we should partition some of the main tables anyway in order to relieve the I/O pressure. The database is about 60 GB in one file. The main table (Order) has 2.1 million rows and 215 columns (but none is huge). We have an integer as PK, so it should be OK to define a partition function. Will we gain something from partitioning? Will partitioned indexes buy us something? Here are some more facts about the DB and the table:

        database_name  database_size  unallocated space
        My_base        57173.06 MB    79.74 MB

        reserved       data           index_size    unused
        29 444 808 KB  26 577 320 KB  2 845 232 KB  22 256 KB

        name   rows       reserved      data          index_size    unused
        Order  2 097 626  4 403 832 KB  2 756 064 KB  1 646 080 KB  1688 KB

    Thanks for any advice.
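    For reference, this is what range partitioning on the integer PK would look like in SQL Server 2005 (where table partitioning requires Enterprise Edition). The boundary values and object names below are illustrative only:

        -- sketch: split the Order table into ranges of the integer PK
        CREATE PARTITION FUNCTION pf_order_id (int)
        AS RANGE RIGHT FOR VALUES (500000, 1000000, 1500000, 2000000);

        CREATE PARTITION SCHEME ps_order_id
        AS PARTITION pf_order_id ALL TO ([PRIMARY]);
        -- mapping every partition to PRIMARY keeps the syntax simple, but the
        -- I/O relief only comes from spreading partitions over separate filegroups

    Note that with everything in one file on one LUN, partitioning mostly buys manageability; to relax the SAN pressure, the partitions (or at least the indexes) would need their own filegroups on separate spindles.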


  • Windows 7 BSOD on startup

    - by Cristy
    I got a BSOD today after getting home from 11 hours of work... The system seems to work in safe mode (sometimes, not always). The BSOD says:

        SYSTEM_SERVICE_EXCEPTION
        STOP: 0x0000003b
        WimFsf.sys - Address FFFFF88001A6B76B base at FFFFF88001A600000, DateStamp 4a5bc362

    Sometimes it shows the welcome screen and the desktop for a few seconds, and then BSODs. Already done: unplugged all the USB devices, reset the CMOS. I haven't installed any new software recently. What should I do?

    EDIT: I've managed to get into safe mode and it seems to work fine. When I go into normal mode, it shows the desktop and then freezes... Is it more likely to be a software or a hardware problem?

    EDIT 2: I've managed to get into normal mode by disabling all the non-Microsoft services and startup programs. One more thing: when I shut down my PC, some artifacts appear on the "Logging off" screen. I don't think it's my graphics card, because I've opened Black Ops and it worked fine. It's so strange... It still BSODs on startup (with maybe a ~10% chance that it won't), and when it doesn't BSOD, it works fine...


  • Copy-paste speed very slow for a large number of tiny files on Windows but not on Linux

    - by Arno2501
    I've got a folder that contains 15,000 tiny images (around 400 bytes each). If I copy-paste this folder on my laptop (Windows 7, latest-gen i7, superfast SSD), it takes about 30 seconds (yes, for 7 megs!!!); the average transfer rate is 400 KB/s, which is terribly slow. I mean, my usual transfer rate is more like hundreds of MB/s!!! I get the same problem on my servers (Windows 2003, 2008/R2) and on every Windows box I could get my hands on. On the other hand, if I do the same on a Linux box (Debian-based, ext3 filesystem, running on the same SAN as all the Windows servers I've tested), it's nearly instantaneous!!! I'm pretty sure the size/number of the files may stress one filesystem more than another, but such a difference!? Why is it so slow on the Windows boxes (more than 30 s for 7 MB) and so fast on the Linux ones (a second or so)? (This was not a hard link I created; it was a true copy.) Is this normal behaviour or something unusual?
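    Per-file overhead (open/close, metadata updates, antivirus hooks) dominates when files are this small, so parallelizing the copy often helps on Windows. A hedged suggestion for the Windows 7 box - robocopy's /MT multithreading flag shipped with Windows 7 / Server 2008 R2, so it won't exist on the 2003 servers, and the paths are placeholders:

        robocopy C:\src\icons D:\dst\icons /E /MT:32 /NFL /NDL

    /E copies subdirectories, /MT:32 runs 32 copy threads, and /NFL /NDL suppress per-file logging (which is itself slow at 15,000 files). It won't make NTFS metadata free, but it usually narrows the gap substantially.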


  • How to install the 64-bit version of MongoDB

    - by slownage
    How can I install the 64-bit (x86_64) version of MongoDB? I've specified the 64-bit tree in 10gen.repo:

        baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64

    But when I run:

        yum install mongo-10gen mongo-10gen-server

    it's the 32-bit build (note the i686) that is set to be installed:

        Failed to set locale, defaulting to C
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: mirror.fdcservers.net
         * epel: mirror.steadfast.net
         * extras: mirror.fdcservers.net
         * rpmforge: mirror.rit.edu
         * updates: mirror.fdcservers.net
        10gen | 951 B 00:00
        Not using downloaded repomd.xml because it is older than what we have:
          Current   : Tue Oct 30 15:55:02 2012
          Downloaded: Tue Oct 30 15:54:51 2012
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package mongo-10gen.i686 0:2.2.1-mongodb_1 will be installed
        ---> Package mongo-10gen-server.i686 0:2.2.1-mongodb_1 will be installed
        --> Finished Dependency Resolution

        Dependencies Resolved

        ==========================================================================
        Package              Arch   Version           Repository   Size
        ==========================================================================
        Installing:
         mongo-10gen         i686   2.2.1-mongodb_1   10gen        42 M
         mongo-10gen-server  i686   2.2.1-mongodb_1   10gen        6.5 M

        Transaction Summary
        ==========================================================================
        Install 2 Package(s)

        Total download size: 48 M
        Installed size: 118 M

    I think I know why it wants to install the 32-bit version: the first time I made the 10gen.repo file I specified the 32-bit link and installed the 32-bit packages, which I later removed. Maybe something has been cached. Could someone help me out with this?
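    The "Not using downloaded repomd.xml because it is older than what we have" line supports the caching theory: yum is still trusting metadata fetched from the old i686 baseurl. A hedged sequence to flush it and pin the architecture explicitly:

        yum clean all                                 # drop cached metadata from the old 32-bit repo
        yum remove mongo-10gen mongo-10gen-server     # in case 32-bit packages are still present
        yum install mongo-10gen.x86_64 mongo-10gen-server.x86_64

    The pkg.arch suffix is ordinary yum syntax and guarantees you get the x86_64 builds even if some repo still advertises i686 ones.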


  • Asus PCE-N53 11n N600 PCI-E Adapter on 3.x kernel

    - by CITguy
    Problem: the ASUS PCE-N53 wireless NIC doesn't work with the latest versions of the Linux kernel. How do I get it working on my system? (Note: I'm posting the answer I've found for others to use.)

    Installing the driver for a Linux 3.x kernel

    ASUS provides Linux drivers from their website (the download can be found under "Support Drivers & Tools"), but it mentions that the driver supports "Linux Kernel 2.6.x", so it won't work without some modifications to the driver code. Fortunately, an archlinux forum mentions similar problems, and one user was able to create a patch for kernel 3.8.x that seems to work with kernel 3.11.x. Here's how I got it working.

    Prerequisites:

        Ubuntu: sudo apt-get install build-essential
        Arch:   sudo pacman -S base-devel

    Steps:

    1. Download the driver from the ASUS website.

    2. Unzip the contents of the downloaded file and cd into the new directory.

    3. Patch. The arch forum mentions a 3.8 patch file that needs to be downloaded. Download rt5592sta_fix_64bit_3.8.patch to the current directory, then:

        tar -xvf {driver_source.tar.gz}
        # cd into the directory created in the previous step
        patch -p1 < ../rt5592sta_fix_64bit_3.8.patch

    4. Compile. NOTE: you will need to use sudo for it to compile properly.

        sudo make
        sudo make install
        sudo modprobe rt5592sta

    5. Enjoy. If all is well, you should now have a working card.


  • Apache Bench reports different result with same page

    - by Aspis
    I'm running into a little problem baselining an Apache2/fcgi/php-fpm server I am setting up.

    1) If I run ab -n 15000 http://mysite.com/index.php, Apache Bench returns "Time per request: 41 ms" but "Document Length: 0 bytes" and "HTML transferred: 0 bytes"; the transfer rate is 7.9 KB/s.

    2) If I run ab -n 15000 http://mysite.com/, Apache Bench returns "Time per request: 83 ms" along with the accurate document length and HTML transferred totals.

    The APC cache status reports identical hit counts for both tests, and Apache Bench reports no errors in either case. Overall, there are no errors on any test sites and all logs are clean, etc. DocumentRoot is set to index.php, so I would expect both of these test runs to produce a similar result. My two questions are: 1) why the discrepancy? 2) which is the correct result? I've seen plenty of results like test 1 posted (without question), but frankly, from my own experience and that of others, accurate testing is hard to come by. Even without goofy issues like this.
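    A document length of 0 bytes means ab received responses with no body at all for /index.php - often a redirect, or an empty response from the fcgi handler, rather than the rendered page. It's worth checking what status line each URL actually returns before comparing timings; a quick sketch with curl (same URLs as above):

        curl -sI http://mysite.com/index.php   # -I sends a HEAD-style request and prints the headers
        curl -sI http://mysite.com/

    If /index.php answers with a 301/302 (or Content-Length: 0), then test 1 is benchmarking the redirect, not the page, and test 2 is the meaningful number.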


  • linux routing issue

    - by Duc To
    Hi! I have two Linksys routers running Linux with the Tomato firmware. Both have internet lines plugged in, but only one acts as the DHCP server (router 1).

    What I am trying to achieve: all packets from internal IPs that want to reach the internet go out through router 1's line, except that when router 1 detects packets for one specific port (for example HTTP, port 80), it should redirect those packets to router 2, and they go out to the internet from there.

    I have found some documents whose solution requires a Linux server with two Ethernet cards: plug both internet lines into that server and route based on it. But I do not want to do that, because my boss does not want the extra work of maintaining that server - besides, he says the router itself is already a Linux one, so why? I tend to agree with his point.

    Can it be done, or is a separate Linux server acting as a router a must? Thank you all in advance; I really look forward to your replies. I am a newbie to Linux networking and this seems to be beyond my capacity to solve :( Yours sincerely! Duc To
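    Tomato's Linux base ships iptables and iproute2, which is all that policy routing needs: mark the traffic you want diverted, then send marked packets through a routing table whose default gateway is router 2. A sketch only - the LAN bridge name br0, router 2's address 192.168.1.2, and table number 100 are assumptions to adapt:

        # on router 1: mark LAN traffic headed to TCP port 80
        iptables -t mangle -A PREROUTING -i br0 -p tcp --dport 80 -j MARK --set-mark 1

        # route marked packets via router 2 instead of the default WAN
        ip rule add fwmark 1 table 100
        ip route add default via 192.168.1.2 table 100

    Tomato exposes a place for such commands under Administration > Scripts (Firewall), so this can survive reboots without any separate server.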


  • SFTP, ChrootDirectory and multiple users

    - by mdo
    I need a setup where I can put the contents of several user folders onto a DMZ server, from which external clients can download them - protocol SFTP, Linux, OpenSSH. To ease administration, we want to use one single user for the upload. What does work is to define ChrootDirectory /home/sftp/ in sshd_config, set the according ownership and modes, and define a home dir in passwd so that the working directory of the user fits. This is my structure:

        /home/sftp/uploader/user1/file1.txt
                           /user2/file2.txt

    The uploader user can write file1.txt and file2.txt to the corresponding folders, and by having the user folders (user1, user2) set to the users' primary group plus setting SETGID on the folders, the users are able even to delete the files (which is necessary). Only problem: because /home/sftp/ is the chroot base dir, the users can change updir and see other users' folders, though they cannot change into them because of access rights.

    Requirement: we want to prevent users from changing to /home/sftp/uploader/ and seeing other users' folders. My requirements are to use SFTP, have one upload user, and every user must have write access to his home dir. Obviously it's not an option to use something like ChrootDirectory %h, because every path component of the chroot path needs to have limited access rights, so as far as I understand this does not work.
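    OpenSSH can chroot each downloader into his own directory while the shared uploader stays unchrooted; the constraint is only that the chroot directory itself must be root-owned and not writable by the user, so the writable area becomes a subdirectory inside it. A sketch for sshd_config - the group name sftponly and the exact layout are assumptions:

        Subsystem sftp internal-sftp

        Match Group sftponly
            ChrootDirectory /home/sftp/uploader/%u
            ForceCommand internal-sftp
            AllowTcpForwarding no

    with ownership along the lines of:

        /home/sftp/uploader/user1        root:root   755   (the chroot itself)
        /home/sftp/uploader/user1/files  user1:users 2775  (setgid dir shared with the uploader's group)

    Users then land in their own chroot and simply cannot see /home/sftp/uploader/ at all, while the uploader (not in sftponly) keeps writing through the real paths.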


  • SQL Server Instance login issue

    - by reallyJim
    I've just brought up a new installation of SQL Server 2008. I installed the default instance as well as one named instance. I'm having a problem connecting to the named instance from anywhere besides the server itself, with any user besides 'sa'. I am running in mixed mode. I have a login/user with a known username. Using that login, I can connect properly when directly on the server. When I attempt to log in from anywhere else, I receive "Login failed for user ''", with error 18456. In the log file on the server, I see a reason that doesn't seem to help: "Reason: Could not find a login matching the name provided." However, that login DOES exist, as I can use it locally. There are no further details about the error. Where can I start looking for something to help me with this? I've tried deleting and recreating the user, as well as creating a new one from scratch - same result: locally fine, remotely an error.

    EDIT: partially resolved. I'm now past the base issue - the clients were trying to connect via the default instance; I don't know why. Once the proper ports were opened in the firewall and a static port was assigned to the named instance, I can now connect - BUT ONLY if I specify the connection as Server,Port. SQLBrowser is apparently not helping/working in this case. I've verified it IS running, and did a stop/restart after my config changes, but no difference yet.
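    SQL Browser answers instance-name lookups on UDP 1434; when that port is blocked, clients fall back to the default instance or need the explicit Server,Port form - both symptoms described above. A hedged check: allow UDP 1434 through the server's firewall and retry connecting as Server\InstanceName. The syntax below is for the built-in Windows Advanced Firewall:

        netsh advfirewall firewall add rule name="SQL Server Browser" dir=in action=allow protocol=UDP localport=1434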


  • Queries passed to SQL Server are getting corrupted

    - by adrianbanks
    We are experiencing a bizarre error with our application at a customer site. We have managed to narrow it down to the point where we can replicate the behaviour using just Management Studio and SQL Server. We have two machines, A and B:

        +------------+                +--------------------+
        | [A]        |                | [B]                |
        | Management | -------------- | SQL Server 2008 R2 |
        | Studio     |                | Enterprise x64     |
        +------------+                +--------------------+

    We are running a SQL script in Management Studio on machine A against the SQL Server instance on machine B. We are not actually executing the script, just parsing it. Most of the time, the parse operation works fine. Occasionally (seemingly randomly), it fails with a syntax error. The error message shows the part of the script with the error, which appears as some SQL from the original script that has been truncated and has random characters appended to it. An example - the original SQL:

        SELECT DISTINCT ST.TABLE_NAME as TableName
        FROM INFORMATION_SCHEMA.TABLES AS ST
        INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC
          ON SC.TABLE_NAME = ST.TABLE_NAME
        WHERE ST.TABLE_TYPE = 'BASE TABLE'
          AND SC.COLUMN_NAME = 'Identity'
          AND ST.TABLE_NAME != 'dtproperties'
        ORDER BY ST.TABLE_NAME

    The SQL that is in error (as reported by SQL Server):

        SELECT DISTINCT ST.TABLE_NAME as TableName
        FROM INFORMATION_SCHEMA.TABLES AS ST
        INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC
          ON SC.TABLE_NAME = Sa?

    The above example shows how the query is being corrupted. It doesn't always happen, and it is not always the same bit of SQL that causes the error. Parsing this script against another SQL Server instance produces no errors, showing that the script is fine. It appears that something is corrupting the SQL that the server receives. This leads me to think that the problem lies either at the client end or in the transmission of the SQL from the client to the server. I have a SQL trace from a period when an error occurred, which shows the SQL had already been corrupted when SQL Server received it. We have been unable to track down any possible cause of this behaviour, and so cannot find a fix. Because the errors occur seemingly randomly, it is also very hard to produce reproduction steps for a bug report. Any ideas?


  • Salary Survey Entry Level network position [closed]

    - by will
    Hello, I started interning with a company about 5 months ago, and for the past 7 months I have been a normal part-time employee. This week I have a review, where I am hoping to get a raise. I started at $8 interning, and now I'm up to $13. What I am trying to figure out is how to survey what others are making in similar positions, so I can take it to the review as a base number. Here are my thoughts on my position right now.

    My qualifications:

        • Associate of Applied Science - IT Network Specialist
        • CompTIA A+ certification
        • CompTIA Server+ certification
        • CompTIA Network+ certification
        • Currently pursuing Cisco certifications
        • Junior status at Insert college - pursuing a Bachelor's in Information Technology with an emphasis in networking, 3.4 GPA
        • 1 year of working at Insert Company

    My contributions to Insert Company:

        • Offering near-fulltime through the semester and fulltime through summer
        • Ability to work after hours and on weekends
        • Developed and support the helpdesk system
        • Set up and maintain the update server to keep desktop clients up to date
        • Deployed and maintain the antivirus solution for end users
        • Assist with main projects such as the SAN, virtualization, and the network survey
        • Migrating end-user stations to Windows 7 (current project)
        • Developing an imaging solution for desktop PCs (current project)

    Any tips on determining an asking number would help. I was thinking $17-$18; am I way off here?

