Search Results

Search found 10328 results on 414 pages for 'behavior tree'.

  • New AD-DC in a new Site is refusing cross-site IPv4 connections

    - by sysadmin1138
    We just added a new Server 2008 (SP2) Domain Controller in a new Site, our first such config. It's across a VPN-gateway WAN link (10Mbit). Unfortunately it is displaying a strange network symptom: connections to the SMB ports (TCP/139 and TCP/445) are being actively refused... if the connection is coming in on pure IPv4. If the incoming connection comes by way of the 6to4 tunnel, those connections establish and work just fine.

    It isn't the firewall, since this behavior can be replicated with the firewall turned off. Also, it's actually issuing RST packets to connection attempts, something that only happens with the Windows Firewall if there is a service behind a port and the service itself denies access. I doubt it's some firewall device on the wire, since the server this one replaced was running Samba and access to it from our main network functioned just fine.

    I'm thinking it might have something to do with the Subnet lists in AD Sites & Services, but I'm not sure. We haven't put any IPv6 addresses in there, just v4, and it's the v4 connections that are being denied. Unfortunately, I can't figure this out. We need to be able to talk to this DC from the main campus. Is there some kind of site-based SMB-level filtering going on? I can talk to the DCs on campus just fine, but that's over the v6 tunnel. I don't have access to a regular machine on that remote subnet, which limits my ability to test.

  • How to deduplicate 40TB of data?

    - by Michael Stauffer
    I've inherited a research cluster with ~40TB of data across three filesystems. The data stretches back almost 15 years, and there are most likely a good number of duplicates, as researchers copy each other's data for different reasons and then just hang on to the copies.

    I know about de-duping tools like fdupes and rmlint. I'm trying to find one that will work on such a large dataset. I don't care if it takes weeks (or maybe even months) to crawl all the data - I'll probably throttle it anyway to go easy on the filesystems. But I need to find a tool that's either somehow super efficient with RAM, or can store all the intermediary data it needs in files rather than RAM. I'm assuming that my RAM (64GB) will be exhausted if I crawl through all this data as one set.

    I'm experimenting with fdupes now on a 900GB tree. It's 25% of the way through and RAM usage has been slowly creeping up the whole time; it's now at 700MB. Or, is there a way to direct a process to use disk-mapped RAM so there's much more available and it doesn't use system RAM? I'm running CentOS 6.
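
    One disk-backed approach worth sketching (the paths and the exact find/uniq invocation here are illustrative, not a tested recipe): hash everything once into a flat file, then let sort's external merge algorithm do the grouping, so memory use stays flat regardless of file count.

        # Hash every file to a list on disk (slow, but not RAM-bound).
        find /mnt/fs1 /mnt/fs2 /mnt/fs3 -type f -print0 \
            | xargs -0 md5sum > /var/tmp/hashes.txt

        # sort(1) spills to temporary files, so the hash list can be huge;
        # uniq then groups lines whose first 32 chars (the md5) repeat.
        sort /var/tmp/hashes.txt \
            | uniq -w32 --all-repeated=separate > /var/tmp/dupes.txt

    Candidate duplicates in dupes.txt would still want a byte-for-byte comparison before deletion, since matching hashes are strong evidence, not proof.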

  • Nginx try_files or else continue matching against locations?

    - by Yang
    I'm wondering whether this is possible with Nginx: I just added a directory with a bunch of HTML files (foo.html, bar.html) that I'd like to serve with /foo, /bar, etc. If the URL doesn't match up with a file name I'd like to fall back to whatever the next best matching location would be. So I have:

        # This block is newly added.
        location ~ ^/([^/]+)$ {
            default_type text/html;
            alias /blah/$1.html;
        }

        # Our long list of existing subsystems below....
        location /subscribe {
            proxy_pass http://127.0.0.1:5000;
        }
        location /upload {
            proxy_pass http://127.0.0.1:8090;
            proxy_read_timeout 99999;
        }
        location ~ /(data|garbage|blargh).* {
            proxy_pass http://127.0.0.1:8090;
            proxy_read_timeout 99999;
            auth_basic text;
            auth_basic_user_file /etc/nginx/htpasswd;
        }
        ....

    The problem is that the first regex now eats up the URLs that would've gone to other locations, as per the documented behavior of location. One approach is to maintain the full explicit list of files in the first location block, but this list is quite large and is always changing. Is there a way to check to see if the file exists first, and if not, then continue with what would've been the next-best location match? I took stabs using try_files (including using a @fallback and nesting locations in there) but I don't think it's capable of doing this. However I thought I'd ask here in case I'm missing something. (Or maybe there's another better approach altogether.)
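
    For what it's worth, nginx never re-enters location matching, but two documented matching rules get close to the desired effect (the root path below is carried over from the question; @missing is a name invented for this sketch): regex locations are evaluated in config order, so the catch-all can simply come after the existing regexes, and prefix locations marked ^~ beat regexes outright.

        # Prefix locations win over any regex when marked ^~.
        location ^~ /subscribe { proxy_pass http://127.0.0.1:5000; }
        location ^~ /upload    { proxy_pass http://127.0.0.1:8090; proxy_read_timeout 99999; }

        # ...existing regex locations (data|garbage|blargh etc.) stay above...

        # Evaluated last among the regexes: serves /blah/foo.html for /foo
        # when the file exists, otherwise hands off to the named location.
        location ~ ^/([^/]+)$ {
            default_type text/html;
            root /blah;
            try_files /$1.html @missing;
        }
        location @missing { return 404; }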

  • Linking to network shares from Sharepoint pages

    - by Russell C
    So the place I work decided to set up a Microsoft Sharepoint 2010 server for task management, and I (as the lowly entry-level intern) have been tasked with "figuring it out." One thing that the end users really, really, really want is the ability to link to network shares (that are readable by anyone who will be using Sharepoint) from a Sharepoint web page. In order to do this, I have edited the HTML manually with several lines that look like the following:

        <a href="file://server/share">Server Share</a>

    This works (sometimes), but the link reported by Sharepoint is often wrong, and editing pages that contain these links will mangle the code such that when I open it, the code no longer looks like what it did when I last hit save (breaking all those links). Obviously this is not sustainable. I've been told by coworkers that "it worked that way at the last place I worked," but I haven't found out how yet. Any ideas on how this would work, or am I barking up the wrong tree? None of the knowledge searches I've done shed any light on the situation. Thanks for any help! -Russell

    P.S. It should be noted that the file option in an href tag ONLY works in IE (which is a real bummer since we mostly use Firefox).

  • Mac Share Points automatically authenticate with matching Windows AD credentials from Windows

    - by Ron L
    I recently started administering an OS X server (10.8) that is on the same network as our AD domain. While setting up Mac Share Points, I encountered some odd behavior that I hope someone can explain. For the purposes of this example, assume the following:

    1) Local user on OS X Server: frank, password: Help.2012
    2) AD domain user: frank, password: Help.2012
    3) AD domain: mycompany
    4) OS X Server hostname: macserver (not bound to AD, not running OD)

    When joined to the domain on a Win 7 computer, logged in as frank and accessing the shares at \\macserver, it automatically authenticates using frank's OS X credentials (because they are the same). However, if I change frank's OS X password, the standard Windows authentication dialog pops up, preset to use frank's AD domain (mycompany\frank). However, after entering the new OS X password, it will not authenticate without changing the domain to local (.\frank).

    Basically, if a user in AD has the same user name and password in OS X, it will authenticate automatically regardless of the domain. If the passwords differ, authenticating to the OS X shares must be done with the local domain. (And slightly off topic - how come an OS X administrator can access the root drives on the Mac server from Windows when accessing the Mac shares, even when they aren't shared? In other words, it will show all the shared folders from "File Sharing" plus whatever drives are mounted in OS X.)

  • How to get robocopy running in powershell?

    - by Moo MinTroll
    I'm trying to use robocopy inside powershell to mirror some directories on my home machines. Here's my script:

        param ($configFile)
        $config = Import-Csv $configFile
        $what = "/COPYALL /B /SEC/ /MIR"
        $options = "/R:0 /W:0 /NFL /NDL"
        $logDir = "C:\Backup\"
        foreach ($line in $config) {
            $source = $($line.SourceFolder)
            $dest = $($line.DestFolder)
            $logfile = $logDIr
            $logfile += Split-Path $dest -Leaf
            $logfile += ".log"
            robocopy "$source $dest $what $options /LOG:MyLogfile.txt"
        }

    The script takes in a csv file with a list of source and destination directories. When I run the script I get these errors:

        -------------------------------------------------------------------------------
           ROBOCOPY     ::     Robust File Copy for Windows
        -------------------------------------------------------------------------------

          Started : Sat Apr 03 21:26:57 2010

           Source : P:\ C:\Backup\Photos \COPYALL \B \SEC\ \MIR \R:0 \W:0 \NFL \NDL \LOG:MyLogfile.txt\
             Dest -

            Files : *.*

          Options : *.* /COPY:DAT /R:1000000 /W:30

        ------------------------------------------------------------------------------

        ERROR : No Destination Directory Specified.

               Simple Usage :: ROBOCOPY source destination /MIR

                     source :: Source Directory (drive:\path or \\server\share\path).
                destination :: Destination Dir  (drive:\path or \\server\share\path).
                       /MIR :: Mirror a complete directory tree.

            For more usage information run ROBOCOPY /?

        ****  /MIR can DELETE files as well as copy them !

    Any idea what I need to do to fix? Thanks, Mark.
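
    The "Source" line of the error banner is the tell: everything was handed to robocopy as one quoted string, so it saw a single argument rather than separate parameters. A hedged rework of the script (variable names kept from the original; the arrays are the key change, since PowerShell expands an array into one argument per element when calling a native command):

        param ($configFile)
        $config  = Import-Csv $configFile
        $what    = '/COPYALL', '/B', '/SEC', '/MIR'   # note: "/SEC/" was also a typo
        $options = '/R:0', '/W:0', '/NFL', '/NDL'
        $logDir  = 'C:\Backup\'

        foreach ($line in $config) {
            $logfile = Join-Path $logDir ((Split-Path $line.DestFolder -Leaf) + '.log')
            # No surrounding quotes: each variable and array element becomes
            # its own token on the robocopy command line.
            robocopy $line.SourceFolder $line.DestFolder $what $options "/LOG:$logfile"
        }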

  • Archive software for big files and fast index

    - by AkiRoss
    I'm currently using tar for archiving some files. Problem is: the archives are pretty big, contain a lot of data, and tar is very slow when listing and extracting. I often need to extract single files or folders from the archive, but I don't currently have an external index of files.

    So, is there an alternative for Linux that allows me to build uncompressed archive files, preserving the file attributes AND having a fast access list table? I'm talking about archives of 10 to 100 GB, and it's pretty impractical to wait several minutes to access a single file. Anyway, any trick to solve this problem is welcome (but single archives are non-optional, so no rsync or similar). Thanks in advance!

    EDIT: I'm not compressing archives, and using tar I think they are too slow. To be precise about "slow", I'd like that:

    - listing archive content should take time linear in the file count inside the archive, but with a very small constant (e.g. if a list of all the files is included at the head of the archive, it could be very fast).
    - extraction of a target file/directory should (filesystem permitting) take time linear with the target size (e.g. if I'm extracting a 2MB PDF file in a 40GB directory, I'd really like it to take less than a few minutes... if not seconds).

    Of course, this is just my idea and not a requirement. I guess such performance could be achievable if the archive contained an index of all the files with their respective offsets, and that index were well organized (e.g. a tree structure).
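
    A low-tech stopgap, assuming plain GNU tar (paths illustrative): keep a table of contents beside each archive, so listing becomes a text-file read instead of a full archive scan. It doesn't fix extraction - tar still walks linearly to the member - but it removes the listing pain; tools that keep an internal catalog (dar, for instance) are the next step up.

        # Build the archive and its index in one pass (-v lists to stdout).
        tar -cpvf data.tar /path/to/data > data.tar.toc

        # "Listing" is now a grep, not an archive scan.
        grep 'report.pdf' data.tar.toc

        # Extraction still scans the archive up to the member.
        tar -xpf data.tar path/to/data/report.pdf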

  • Why can a local root turn into any LDAP user?

    - by Daniel Gollás
    I know this has been asked here before, but I am not satisfied with the answers and don't know if it's ok to revive and hijack an older question.

    We have workstations that authenticate users against an LDAP server. However, the local root user can su into any LDAP user without needing a password. From my perspective this sounds like a huge security problem that I would hope could be avoided at the server level. I can imagine the following scenario where a user can impersonate another, and I don't know how to prevent it:

    1. UserA has limited permissions, but can log into a company workstation using their LDAP password. They can cat /etc/ldap.conf and figure out the LDAP server's address, and ifconfig to check out their own IP address. (This is just an example of how to get the LDAP address; I don't think that is usually a secret, and obscurity is not hard to overcome.)
    2. UserA takes out their own personal laptop, configures authentication and network interfaces to match the company workstation, plugs the network cable from the workstation into the laptop, boots, and logs in as local root (it's his laptop, so he has local root).
    3. As root, they su into any other user on LDAP, who may or may not have more permissions (without needing a password!). At the very least, they can impersonate that user without any problem.

    The other answers on here say that this is normal UNIX behavior, but it sounds really insecure. Can the impersonated user act as that user on an NFS mount, for example? (The laptop even has the same IP address.) I know they won't be able to act as root on a remote machine, but they can still be any other user they want! There must be a way to prevent this at the LDAP server level, right? Or maybe at the NFS server level? Is there some part of the process that I'm missing that actually prevents this? Thanks!!
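
    The mechanics, for what it's worth: su never consults the directory when root invokes it, because the stock PAM stack short-circuits on pam_rootok (the excerpt below is typical of Linux distributions; exact contents vary per distro). That's why the usual mitigations are server-side, e.g. exporting NFS with Kerberos security (sec=krb5) instead of AUTH_SYS, so a borrowed UID or IP address buys nothing without a valid ticket.

        # /etc/pam.d/su (typical excerpt)
        auth  sufficient  pam_rootok.so   # root skips authentication entirely
        auth  required    pam_unix.so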

  • How do I get netcat to accept connections from outside the LAN?

    - by Chris
    I'm using netcat as a backend to shovel data back and forth for a program I'm making. I tested my program on the local network, and once it worked I thought it would be a matter of simply forwarding a port from my router to have my program work over the internet. Alas! This seems not to be the case.

    If I start netcat listening on port 6666 with nc -vv -l -p 6666, then go to 127.0.0.1:6666 in a browser, as expected I see an HTTP GET request come through netcat (and my browser sits waiting in vain). If I go to my.external.ip.address:6666, however, nothing comes through at all and the browser displays 'could not connect to my.external.ip.address:6666'. I know that the port is correctly forwarded, as www.canyouseeme.org says port 6666 is open (and when netcat is not listening, that it's closed).

    If I run netcat with -g my.adslmodem's.local.address to set the gateway address, I get the same behavior. Am I using this command line option correctly? Any insight as to what I'm doing wrong?
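
    One thing worth ruling out (a guess, not a diagnosis): many consumer routers don't "hairpin" connections from inside the LAN back to their own public address, which would produce exactly this split - canyouseeme, being genuinely external, sees the port open, while a browser behind the same router cannot connect. Testing from a truly external host settles it:

        # Run this from a machine outside the LAN (a VPS, a friend's box,
        # a phone off Wi-Fi) - not from behind the same router.
        nc -vv my.external.ip.address 6666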

  • update from debian lenny to squeeze

    - by Daniel
    I'm trying to update from debian lenny to squeeze on my 64bit root server and did the following so far:

        modifying sources.list
        apt-get update
        apt-get upgrade
        apt-get install linux-image-2.6-amd64

    The last step leads to the following error output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          linux-image-2.6-amd64: Depends: linux-image-2.6.32-5-amd64 but it is not going to be installed
        E: Broken packages

    UPDATE: here's my sources.list

        deb ftp://mirror.hetzner.de/debian/packages squeeze main contrib non-free
        deb ftp://mirror.hetzner.de/debian/security squeeze/updates main contrib non-free
        deb http://ftp.de.debian.org/debian squeeze main non-free contrib
        deb-src http://ftp.de.debian.org/debian squeeze main non-free contrib
        deb http://security.debian.org/ squeeze/updates main contrib non-free
        deb-src http://security.debian.org/ squeeze/updates main contrib non-free

    How can I fix that safely? thx
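
    For the record, the lenny-to-squeeze release notes prescribe a two-stage upgrade, and "not going to be installed" here usually just means that plain apt-get install refuses to pull in the new kernel's dependency chain; dist-upgrade is the step that is allowed to add and remove packages. A hedged sequence (back up first):

        apt-get update
        apt-get upgrade          # minimal upgrade of what's safely upgradable
        apt-get dist-upgrade     # full upgrade; may install/remove packages,
                                 # including the squeeze kernel and udev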

  • tail -f and then exit on matching string

    - by Patrick
    I am trying to configure a startup script which will start up tomcat, monitor the catalina.out for the string "Server startup", and then run another process. I have been trying various combinations of tail -f with grep and awk, but haven't got anything working yet. The main issue I am having seems to be with forcing the tail to die after grep or awk has matched the string. I have simplified it to the following test case. test.sh is listed below:

        #!/bin/sh
        rm -f child.out
        ./child.sh > child.out &
        tail -f child.out | grep -q B

    child.sh is listed below:

        #!/bin/sh
        echo A
        sleep 20
        echo B
        echo C
        sleep 40
        echo D

    The behavior I am seeing is that grep exits after 20 seconds; however, the tail takes a further 40 seconds to die. I understand why this is happening - tail will only notice that the pipe is gone when it writes to it, which only happens when data gets appended to the file. This is compounded by the fact that tail seems to be buffering the data and outputting the B and C characters as a single write (I confirmed this by strace). I have attempted to fix that with solutions I found elsewhere, such as using the unbuffer command, but that didn't help.

    Anybody got any ideas for how to get this working how I expect it? Or ideas for waiting for a successful Tomcat start? (I'm thinking about waiting for a TCP port to know it has started, but suspect that will become more complex than what I am trying to do now.) I have managed to get it working with awk doing a "killall tail" on match, but I am not happy with that solution. Note I am trying to get this to work on RHEL4.
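
    One pattern that sidesteps the orphaned tail entirely (a sketch, not RHEL4-verified; the follow-on script name is hypothetical): poll the log with grep instead of streaming it, so there is no long-lived reader to clean up. Each iteration re-scans the file, which is acceptable for a startup-sized log.

        #!/bin/sh
        # Block until the marker appears in the log, checking every 2 seconds.
        # grep -q exits 0 on the first match, so the loop ends immediately.
        until grep -q "Server startup" catalina.out 2>/dev/null; do
            sleep 2
        done
        # Nothing left running in the background; safe to continue.
        echo "Tomcat is up"
        ./run-next-process.sh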

  • Can't connect to MS SQL Server database using SSMS

    - by Charles
    I have a database online with Godaddy (who uses SQL Server 2005). They provide basic management tools, but tell you that for more advanced tools you can connect directly using SSMS. I followed their instructions to ensure my online database will accept remote connections, and can apparently log in using SSMS with success (after giving my hostname and access data). However: now from in SSMS, when attempting to expand the "Databases" folder tree, I get the following error:

        Failed to retrieve data for this request.
        (Microsoft.SqlServer.Management.Sdk.Sfc)

        An exception occurred while executing a Transact-SQL statement or batch.
        (Microsoft.SqlServer.ConnectionInfo)

        The server principal "cmitchell" is not able to access the database
        "3pointdb" under the current security context.
        (Microsoft SQL Server, Error: 916)

    The irony is that 3pointdb isn't my database. It is just another in a long list of databases that show up when I access my Godaddy backend. From SSMS, I selected the default database to be the name of my database, which it did locate on the list when I browsed. Still the same error message. It is trying to connect to a database that isn't mine! :( Godaddy support, after a bit of testing, said the problem isn't on their end. It's on mine. - Charles

  • JBoss 5 on AIX 5.3

    - by jess
    I am a complete newbie with AIX and system monitoring. Our application currently runs in production on JBoss 5.1 on AIX 5.3. Please see the configuration and system settings below.

    AIX system configuration:

    - OS level: 5.3.9.0 (oslevel -g)
    - Physical memory size: 24GB (svmon -G)
    - Page space: 4GB (lsps -s)
    - Processors: 3 cores, Processor Type: PowerPC_POWER6, Processor Clock Speed: 4704 MHz (prtconf | grep Processor)

    Java version:

    - JRE 1.6.0 IBM AIX build pap6460sr10fp1-20120321_01 (SR10 FP1) (java -fullversion)

    JBoss configuration:

    - JBoss 5.1/JBoss ESB 4.11
    - Hornetq messaging with consumer flow control
    - java opts: -d64 -Xms2g -Xmx4g -XX:MaxPermSize=1024m

    Sometimes we observe very strange behavior in JBoss: it freezes without any error logs, and the server log stops without any further trace. We are also not able to get a thread dump (kill -3); it is not generated at that point (kill -3 xxxxx works in normal circumstances). The only option available to us was to restart the JBoss server, and it seems all messages that were in the queues during the freeze are processed after restarting.

    We tried tweaking some settings in JBoss Hornetq, since we thought the issue was there (Hornetq Stuck By Default), but we haven't had any luck and are also unable to isolate the issue at any point. We are looking at tools like nmon for monitoring this, but have no clue whether that is good enough. Please provide some pointers to investigate this issue. Thanks

  • What is preventing my computer from going idle?

    - by brianberns
    When I first boot my Windows 7 computer, it will go idle if I stop using it - first the screensaver comes on, then the computer goes to sleep after a certain amount of time. This is the expected behavior. However, after I've used the computer for a while without rebooting (about a day or so), I've noticed that it stops going idle - the screensaver won't come on, and the computer won't sleep, no matter how long it sits unused.

    I've confirmed that the idle timer is increasing as expected via GetLastInputInfo. However, it looks like something is interfering with the results from CallNtPowerInformation. Every 14 or 16 seconds, the TimeRemaining value jumps back up to its maximum value when I query SystemPowerInformation. I've used the SysInternals Process Monitor to detect any unusual events that might be happening to trigger this reset, but have come up empty.

    Does anyone know exactly what the possible causes of TimeRemaining resetting to its maximum value are? I'm fairly sure it's not my mouse, keyboard, or network sending spurious events, because I've disabled each one and the problem continues to occur. (That would also reset the GetLastInputInfo timer, which is not happening.) I'm looking for something that affects SystemPowerInformation TimeRemaining, but does not affect GetLastInputInfo. Thanks.
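
    A quick triage step, in case it helps: Windows 7's powercfg can list the power requests that applications and drivers are holding, which covers most "machine refuses to idle" culprits without going through the API at all.

        :: Run from an elevated command prompt.
        :: List who is holding DISPLAY / SYSTEM / AWAYMODE power requests:
        powercfg -requests
        :: Run a 60-second trace and write an HTML report of idle offenders:
        powercfg -energy
        :: Show which devices are currently allowed to wake the machine:
        powercfg -devicequery wake_armed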

  • Webserver optimization

    - by f-aminov
    Hi guys! I have a website hosted on a VPS (512MB minimum guaranteed memory, 510MHz processor, Debian 5.0 Lenny, Apache 2.2.9 with nginx 0.7.65 as a frontend to serve static content, MySQL 5.1.44, PHP 5.3.2 with APC caching). I'm a web developer, so I'm not very good at optimizing servers, but I've managed to install and set up all the necessary components (LAMP, nginx, etc.).

    After that I decided to stress test my website (which uses Drupal 6.16 with caching and all possible optimization enabled) using a utility called "Webserver Stress Tool 7". And it seems to me that the results aren't any good - here is a graph (sorry, as a new user I'm not allowed to post images). As you can see, the response time increases very quickly with the number of simultaneous users. With 10 simultaneous users the time is about 1000ms; with 100 simultaneous users it's about 15000ms (15s!).

    The question is: do you think this is normal behavior for such a server, or is something wrong with the settings and optimization? If you think something is wrong, what in particular could be wrong? Any other suggestions on how to speed this up a little bit?
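
    Impossible to say for sure without the graph, but on a 512MB VPS the usual first suspect is Apache's prefork MPM spawning more children than RAM allows, which pushes the box into swap exactly as concurrency climbs. A hedged starting point - the values below are illustrative and should be tuned against the actual per-process memory of this PHP setup:

        # Apache 2.2 prefork settings for a small-memory VPS.
        <IfModule mpm_prefork_module>
            StartServers          2
            MinSpareServers       2
            MaxSpareServers       4
            MaxClients           10
            MaxRequestsPerChild 500
        </IfModule>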

  • How to change controller numbering/enumeration in Solaris 10?

    - by Jim
    After moving a Solaris 10 server to a new machine, the rpool disk is now c1t0d0. We have some third party applications hard coded for c0t0d0. How can I change the controller enumeration on this machine? There is no longer a c0.

    I've tried rebuilding the /etc/path_to_inst, but the instance numbers don't seem to match up with the controller numbers. Also, it's not clear if i86pc platforms use this file. I've tried devfsadm -C to clear the dangling links, but I'm not sure how to cause devfsadm to start numbering from 0 again (or force certain devices in the tree to a specific controller number). Next I am going to try to create the symlinks manually in /dev/dsk and rdsk to point to the correct /devices. I feel like I am going way off path here. Any suggestions? Thanks

    Update: This is on virtual ESXi hardware with an additional pass-through HBA. There is no controller 0 on the machine, that is for sure. devfsadm -C cleans up all the c0 device symlinks but keeps the already linked controllers at their current ids.

  • IIS6 intranet site using integrated authentication fails to load when accessed externally

    - by maik
    I've developed a couple of internal sites for my organization that use integrated authentication. Ultimately we want these sites to be accessible externally to users with domain-joined computers. The sites work as expected on domain computers while on the internal network. The problem comes when I take my laptop home and try to access those sites.

    IIS only has integrated authentication enabled for the two sites. When I browse to the site using IE8, I get a username/password prompt asking for domain credentials. I can put those in and it will work, but the goal is to use the cached token for integrated authentication. Next I reasoned that IE wouldn't respond to an integrated auth request (is NTLM the right term for this?) unless the site was trusted. I tried adding the site to Trusted Sites, but I get the same behavior as before. I then added the site to Local Intranet sites, and that is where things get weird: I get a generic error page from IE, no error code or anything.

    Just for funsies I loaded up Firefox (which I had previously set up to use integrated authentication) and added this new site to network.automatic-ntlm-auth.trusted-uris. Much to my surprise, I was able to load the pages up with no problem at all and saw exactly what I was expecting (including verification that the integrated authentication worked). My mind is a bit boggled at the moment, as I'm not really sure where to go from here. I was hoping some of you may be able to provide some insight.

  • Kubuntu: apt-get install of php5-dev: libtool version mismatch?

    - by pinkgothic
    (Warning, clueless-newbism ahead.) Background info: I'm actually trying to install/upgrade xdebug. sudo pecl install xdebug yields:

        downloading xdebug-2.0.5.tgz ...
        Starting to download xdebug-2.0.5.tgz (289,234 bytes)
        ............................................................done: 289,234 bytes
        67 source files, building
        running: phpize
        sh: phpize: not found
        ERROR: `phpize' failed

    A quick google tells me that phpize is a part of a package called php5-dev, so off I ran to install that. My problem is that using sudo apt-get install php5-dev fails with this output:

        sudo apt-get install php5-dev
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          php5-dev: Conflicts: libtool (>= 2.2) but 2.2.6a-4 is to be installed
        E: Broken packages

    2.2.6a-4 is greater than 2.2, so I'm not sure why it's hanging itself up at that point. I'm guessing the fact that it's not entirely numeric is throwing apt-get off? I can probably install xdebug manually (though I've never done this before, so picture me with a clueless-newb deer-in-headlights look here, violently shaking my head and begging for a simpler solution) rather than via pecl / aptitude, but is there a way I can make aptitude install php5-dev despite the bogus 'broken package' claim? Is it even bogus, or am I misreading the error message? Alternatively: could I install phpize in some other way (e.g. via pear or pecl)?
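
    A reading worth double-checking: that line is a Conflicts:, not a Depends:, so apt is saying php5-dev refuses to coexist with any libtool >= 2.2 - and the installed 2.2.6a-4 satisfies that, which is precisely the problem. The version comparison is working; the packages genuinely clash. aptitude can usually propose a resolution, typically by downgrading libtool:

        apt-cache policy libtool        # see which libtool versions are on offer
        sudo aptitude install php5-dev
        # aptitude will offer numbered solutions; the workable one generally
        # involves installing the older libtool that php5-dev accepts.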

  • Unable to install mysql-server in Ubuntu

    - by Arihant
    I am unable to install mysql-server on my Ubuntu 9.10 server machine. When using apt-get install mysql-server the output is:

        # apt-get install mysql-server
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        mysql-server is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 120 not upgraded.
        2 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Setting up mysql-server-5.1 (5.1.37-1ubuntu5.4) ...
         * Stopping MySQL database server mysqld          [ OK ]
         * Starting MySQL database server mysqld          [fail]
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing mysql-server-5.1 (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of mysql-server:
         mysql-server depends on mysql-server-5.1; however:
          Package mysql-server-5.1 is not configured yet.
        dpkg: error processing mysql-server (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         mysql-server-5.1
         mysql-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I can't find a satisfactory solution to this problem anywhere. Many sites say to reinstall it, but that's not working. Any help will be appreciated. Thank you.
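
    Note that the package is installed but not configured: the post-install script fails only because mysqld itself refuses to start. The dpkg noise is downstream of whatever MySQL logs on startup, so that's the place to look first (log locations below are the usual Ubuntu defaults; yours may differ):

        sudo tail -n 50 /var/log/daemon.log /var/log/syslog   # mysqld start errors
        sudo mysqld --verbose --help > /dev/null              # flags bad my.cnf options
        sudo dpkg --configure -a                              # retry once mysqld starts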

  • Unsigned lenny packages with aptitude safe-upgrade

    - by Liam
    I have several Debian lenny computers. Two have nearly identical sources.list files. On both, I do regular update/safe-upgrades. On one it always goes smoothly. On the other, much of the time I get the following:

        sudo aptitude safe-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done
        Reading task descriptions... Done
        The following packages will be upgraded:
          krb5-clients krb5-ftpd krb5-rsh-server krb5-telnetd krb5-user libimlib2
          libkadm55 libkrb53 libpng12-0 libpulse0 xpdf xpdf-common xpdf-reader
        13 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 2906kB of archives. After unpacking 36.9kB will be used.
        Do you want to continue? [Y/n/?]
        WARNING: untrusted versions of the following packages will be installed!

        Untrusted packages could compromise your system's security.
        You should only proceed with the installation if you are certain that
        this is what you want to do.

          krb5-rsh-server krb5-user krb5-ftpd krb5-clients libkrb53 xpdf-reader
          libpng12-0 libkadm55 xpdf libpulse0 libimlib2 krb5-telnetd xpdf-common

        Do you want to ignore this warning and proceed anyway?
        To continue, enter "Yes"; to abort, enter "No": no
        Abort.

    Needless to say, I don't proceed. What is going on? How do I fix it? These are the non-comment lines in the sources.list for this computer:

        deb ftp://ftp.debian.org/debian/ lenny main contrib non-free
        deb-src ftp://ftp.debian.org/debian/ lenny main contrib
        deb http://security.debian.org/ lenny/updates main contrib non-free

    Thank you.
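
    Untrusted warnings on one box but not its near-twin usually point at stale or missing archive keys, or corrupted package lists, on the complaining machine rather than at the mirror. A hedged repair sequence, followed by comparing key output against the healthy computer:

        sudo apt-get install debian-archive-keyring   # refresh Debian's signing keys
        sudo rm -rf /var/lib/apt/lists/*              # discard possibly corrupt lists
        sudo apt-get update                           # re-fetch and re-verify them
        apt-key list                                  # diff this against the good box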

  • What is the difference between Startup programs in Windows and the same programs being started manually?

    - by sup
    I am no Windows guy, but I am trying to get seamless integration of a Windows program through a VirtualBox Windows guest onto my Ubuntu machine. I more or less followed this tutorial: https://nowhere.dk/articles/running-windows-applications-natively-with-seamlessrdp

    Basically I start up Windows in VirtualBox and then try to launch an application (on the Ubuntu host) like this:

        rdesktop -A -s "c:\Program Files\ThinLinc\WTSTools\seamlessrdpshell.exe notepad.exe" 192.168.123.103:3389 -u user -p password

    That just gives me the full Windows desktop, which I do not want. However, when I run (on the Windows guest):

        "c:\Program Files\ThinLinc\WTSTools\seamlessrdpshell.exe" "notepad"

    the command works and I get just the window I want. So I thought I would put this command into the Startup folder of the Windows machine and everything would be fine. But it says "Unable to set up the virtual channel". (By googling, I nailed it down to this file: https://sourceforge.net/p/rdesktop/code/1686/tree/seamlessrdp/trunk/ServerExe/vchannel.c - the warning is triggered (by main.c in the same directory) when the function vchannel_open() returns something that C interprets as true for the if condition.)

    I have no idea why it works when I launch this command manually via a bat file and not when I put it in Startup programs. Any ideas?

  • Spotlight actually searching every file on "This Mac"

    - by Cawas
    I know of 2 ways to search for any file on your machine using Finder (some say it's Spotlight) and no Terminal. To prevent answers / comments about Terminal: I consider it either for scripting something or as a last resort; it's not practical for lots of usages. For instance, if you want to find something to attach to a mail, or embed in iTunes or any other app, you can just drag 'n' drop one or many of them. Definitely not practical to do under Terminal. There are many cases of use for any, but the focus here is the Graphical User Interface.

    Well, the 2 ways basically are:

    1. Press Cmd + Opt + Spacebar and type in your search. Press the + button, select "System files" and "are included". This is so far my preferred way, but I'm not sure it will go through every file.
    2. Open Finder, press Cmd + Shift + G and/or select just one folder. Type in your search and select the folder rather than "This Mac". This will bring up files not shown in "This Mac" if you select a folder outside of the default scope.

    Thing is, none of those is really convenient or has the nice presentation of regular Spotlight, which you get from Cmd + Spacebar and just typing. And, as far as I've heard, the default behavior of Spotlight in Tiger was actually being able to find files anywhere. So, is there any way to make the process significantly simpler? Maybe some tweak, configuration or really good Spotlight alternative? I'd rather keep it simple and tweak Spotlight.

  • php5-mysqlnd on debian wheezy/sid?

    - by Joseph
    I am trying to install php5-mysqlnd on a fresh install of Wheezy (/etc/debian_version refers to it as wheezy/sid) and I'm having a problem:

        root@debian:/var/www/lottery1# apt-get install php5-mysqlnd
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        php5-mysqlnd is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        Setting up php5-mysqlnd (5.4.0-3) ...
        ucfr: Attempt from package php5-mysqlnd to take /etc/php5/mods-available/mysql.ini away from package php5-mysql
        ucfr: Aborting.
        dpkg: error processing php5-mysqlnd (--configure):
         subprocess installed post-installation script returned error exit status 4
        Processing triggers for libapache2-mod-php5 ...
        configured to not write apport reports
        Reloading web server config: apache2.
        Errors were encountered while processing:
         php5-mysqlnd
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    It seems there is some sort of conflict with the php5-mysql package, but I still get this error even after removing (with --purge) the php5-mysql package. Any thoughts? I'm trying to run a web tool that makes heavy use of mysqli_result::fetch_all(). Thanks!
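
    The sticking point looks like ucf's own registry, which can survive a package purge: ucfr still records /etc/php5/mods-available/mysql.ini as belonging to php5-mysql, so php5-mysqlnd's postinst refuses to take it over. A hedged cleanup to try (see ucf(1)/ucfr(1) first; the registry lives under /var/lib/ucf):

        # Drop the stale ucf association, then let the postinst re-run.
        sudo ucf --purge /etc/php5/mods-available/mysql.ini
        sudo ucfr --purge php5-mysql /etc/php5/mods-available/mysql.ini
        sudo dpkg --configure -a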

  • Linux Scheduler (not using all cores on multi-core machine) RHEL6

    - by User512
    I'm seeing strange behavior on one of my servers (running RHEL 6). There seems to be something wrong with the scheduler. Here's the test program I'm using:

        #include <stdio.h>
        #include <unistd.h>
        #include <stdlib.h>

        void RunClient(int i) {
          printf("Starting client %d\n", i);
          while (true) {
          }
        }

        int main(int argc, char** argv) {
          for (int i = 0; i < 4; ++i) {
            pid_t p_id = fork();
            if (p_id == -1) {
              perror("fork");
            } else if (p_id == 0) {
              RunClient(i);
              exit(0);
            }
          }
          return 0;
        }

    This machine has a lot more than 4 cores, so we'd expect all processes to be running at 100%. When I check top, the CPU usage varies. Sometimes it's split (100%, 33%, 33%, 33%); other times it's split (100%, 100%, 50%, 50%). When I try this test on another server of ours (running RHEL 5), there are no issues (it's 100%, 100%, 100%, 100%) as expected. What's causing this and how can I fix it? Thanks
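
    One plausible culprit, offered as a guess rather than a diagnosis: fair group scheduling. Kernels of the RHEL 6 era balance CPU time between task groups (cgroups, and autogroups where enabled) before balancing between tasks, so children landing in the same group can end up sharing a core while another group keeps one to itself. A few harmless checks:

        # Is autogrouping on? (The file only exists on kernels built with it.)
        cat /proc/sys/kernel/sched_autogroup_enabled

        # Which cgroup does each spinner sit in? (Binary name is whatever
        # the test program was compiled as.)
        for p in $(pgrep -f your_test_binary); do
            cat /proc/$p/cgroup
        done

        # Is the shell itself restricted to a subset of CPUs?
        taskset -p $$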
