Search Results

Search found 4893 results on 196 pages for 'expect'.

Page 165/196

  • Zend Framework: Navigation XML and duplicate page elements

    - by jakenoble
    Hi. In XML I'd normally expect the following to be perfectly valid and navigable in a meaningful way using something like PHP's DomDocument: <?xml version="1.0" encoding="UTF-8"?> <configdata> <page> <name>Home</name> </page> <page> <name>Log in</name> </page> </configdata> This is not the case when using Zend_Navigation. Each <page> element needs to have a unique name, so you would need to do: <?xml version="1.0" encoding="UTF-8"?> <configdata> <page_home> <name>Home</name> </page_home> <page_log_in> <name>Log in</name> </page_log_in> </configdata> This works, but it is very annoying. I'd much rather have multiple page elements which can share the same name and can easily be copied and pasted when creating many pages for navigation. Why does each one need a unique name? Is there a way to avoid having to use a unique name?
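
    A minimal sketch of one way around this, assuming Zend Framework 1: Zend_Navigation can be built from a plain PHP array instead of Zend_Config_Xml, so no unique element names are needed (the 'uri' values below are assumptions):

        // build the navigation container from an array of page definitions
        $pages = array(
            array('label' => 'Home',   'uri' => '/'),
            array('label' => 'Log in', 'uri' => '/login'),
        );
        $container = new Zend_Navigation($pages);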

    Read the article

  • Strange PHP array behavior overwriting values with all the same values

    - by dasdas
    I'm doing a simple mysqli query with code I've used many times before, but I have never had this problem happen to me. I am grabbing an entire table with an unknown number of columns (it changes often, so I don't know the exact count, nor the column names). I have some code that uses metadata to grab everything and stick it in an array. This all works fine, but the output is messed up: $stmt -> execute(); //the query is legit, no problems there $meta = $stmt->result_metadata(); while ($field = $meta->fetch_field()) { $params[] = &$row[$field->name]; } call_user_func_array(array($stmt, 'bind_result'), $params); while ($stmt->fetch()) { $pvalues[++$i] = $row; //$pvalues is an array of arrays. row is an array //print_r($row); print_r($pvalues[$i-1]); } $stmt -> close(); I would assume that $pvalues has the results that I am looking for. My table currently has 2 rows. $pvalues has array length 2. Both rows in $pvalues are exactly the same. If I use print_r($row), it prints out the correct values for both rows, but if later on I check what is in $pvalues it is incorrect (one row is assigned to both indices of $pvalues). If I use print_r($pvalues[$i-1]), it prints exactly as I expect, the same row in the table twice. Why isn't the data getting assigned to $pvalues? I know $row holds the right information at one point, but it is getting overwritten or lost.
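
    A minimal sketch of the usual fix: bind_result() binds each element of $row by reference, so every stored row keeps pointing at values that the next fetch() overwrites. Copying the values out before storing them breaks the references:

        while ($stmt->fetch()) {
            $copy = array();
            foreach ($row as $key => $val) {
                $copy[$key] = $val;   // copy the value, dropping the reference
            }
            $pvalues[++$i] = $copy;
        }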

    Read the article

  • How to detect if an RGB value is fully transparent?

    - by omega
    In Java, I want to make a fully transparent RGBA value, and I do that using public static int getTransparentRGB() { int r = 0; int g = 0; int b = 0; int a = 0; int new_pixel = (a << 24) | (r << 16) | (g << 8) | b; return new_pixel; } Color color = new Color(getTransparentRGB()); System.out.println(color.getAlpha()); // -> 255 ?! I purposely keep all RGBA components 0. However, after I create the Color object with that RGBA value passed to the constructor, calling .getAlpha() returns 255 even though I built the value with a 0 alpha. If it returns 255, how could I tell it apart from a Color object that wasn't transparent, since that would also have a 255 alpha? I expect the Color object to return a 0 alpha based on the function above. Does anyone know what's going on? Thanks
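
    A minimal sketch of what is most likely happening: the single-argument Color(int) constructor ignores the high (alpha) byte, while the two-argument constructor with hasalpha = true keeps it:

        // keep the alpha byte from the packed ARGB int
        Color color = new Color(getTransparentRGB(), true);
        System.out.println(color.getAlpha());   // prints 0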

    Read the article

  • mCustomScrollbar append new content not working

    - by Dariel Pratama
    I have this script on my website: $(document).ready(function(){ var comment = $('textarea[name=comment_text]').val(); var vid = $('input[name=video_id]').val(); $('#add_comment').on('click', function(){ $.post('<?php echo site_url('comment/addcomments'); ?>', $('#comment-form').serialize(), function(data){ data = JSON.parse(data); if(data.userdata){ var date = new Date(data.userdata.comment_create_time * 1000); var picture = data.userdata.user_image.length > 0 ? 'fileupload/'+data.userdata.user_image:'images/no_pic.gif'; var newComment = '<div class="row">\ <div class="col-md-4">\ <img src="<?php echo base_url(); ?>'+picture+'" class="profile-pic margin-top-15" />\ </div>\ <div class="col-md-8 no-pade">\ <p id="comment-user" class="margin-top-15">'+data.userdata.user_firstname+' '+data.userdata.user_lastname+'</p>\ <p id="comment-time">'+date.getTime()+'</p>\ </div>\ <div class="clearfix"></div>\ <div class="col-md-12 margin-top-15" id="comment-text">'+data.userdata.comment_text +'</div>\ <div class="col-md-12">\ <div class="hr-grey margin-top-15"></div>\ </div>\ </div>'; $('#comment-scroll').append($(newComment)); $('#comment').modal('hide'); } }); }); }); What I expect is that when a comment is added to the DB and the PHP page gives a JSON response, the new comment will be appended at the end of $('#comment-scroll'). #comment-scroll also has a custom scrollbar from mCustomScrollbar. The above script also hides the modal dialog when the comment is saved, and that part works fine, which means data.userdata is not empty, so why isn't the append() working?
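
    A sketch of the usual workaround, assuming the element was initialised with mCustomScrollbar: the plugin moves the original markup into an inner .mCSB_container element, so anything appended to the outer #comment-scroll ends up outside the scrollable area. Appending into the container and asking the plugin to re-measure should make the new comment visible:

        // append inside the plugin's container, then let it recalculate
        $('#comment-scroll .mCSB_container').append($(newComment));
        $('#comment-scroll').mCustomScrollbar('update');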

    Read the article

  • Template function overloading with identical signatures, why does this work?

    - by user1843978
    Minimal program: #include <stdio.h> #include <type_traits> template<typename S, typename T> int foo(typename T::type s) { return 1; } template<typename S, typename T> int foo(S s) { return 2; } int main(int argc, char* argv[]) { int x = 3; printf("%d\n", foo<int, std::enable_if<true, int>>(x)); return 0; } output: 1 Why doesn't this give a compile error? When the template code is generated, wouldn't the functions int foo(typename T::type s) and int foo(S s) have the same signature? If you change the template function signatures a little bit, it still works (as I would expect given the example above): template<typename S, typename T> void foo(typename T::type s) { printf("a\n"); } template<typename S, typename T> void foo(S s) { printf("b\n"); } Yet this one doesn't work, and the only difference is that one overload takes an int directly while the other's parameter is defined by the first template parameter: template<typename T> void foo(typename T::type s) { printf("a\n"); } template<typename T> void foo(int s) { printf("b\n"); } I'm using code similar to this for a project I'm working on, and I'm afraid that there's a subtlety to the language that I'm not understanding that will cause some undefined behavior in certain cases. I should also mention that it does compile on both Clang and VS11, so I don't think it's just a compiler bug.

    Read the article

  • Is there a way to update the height of a single UITableViewCell, without recalculating the height for every cell?

    - by Chris Vasselli
    I have a UITableView with a few different sections. One section contains cells that will resize as a user types text into a UITextView. Another section contains cells that render HTML content, for which calculating the height is relatively expensive. Right now when the user types into the UITextView, in order to get the table view to update the height of the cell, I call [self.tableView beginUpdates]; [self.tableView endUpdates]; However, this causes the table to recalculate the height of every cell in the table, when I really only need to update the single cell that was typed into. Not only that, but instead of recalculating the estimated height using tableView:estimatedHeightForRowAtIndexPath:, it calls tableView:heightForRowAtIndexPath: for every cell, even those not being displayed. Is there any way to ask the table view to update just the height of a single cell, without doing all of this unnecessary work? Update: I'm still looking for a solution to this. As suggested, I've tried using reloadRowsAtIndexPaths:, but it doesn't look like this will work. Calling reloadRowsAtIndexPaths: with even a single row will still cause heightForRowAtIndexPath: to be called for every row, even though cellForRowAtIndexPath: will only be called for the row you requested. In fact, it looks like any time a row is inserted, deleted, or reloaded, heightForRowAtIndexPath: is called for every row in the table. I've also tried putting code in willDisplayCell:forRowAtIndexPath: to calculate the height just before a cell is going to appear. In order for this to work, I would need to force the table view to re-request the height for the row after I do the calculation. Unfortunately, calling [self.tableView beginUpdates]; [self.tableView endUpdates]; from willDisplayCell:forRowAtIndexPath: causes an index out of bounds exception deep in UITableView's internal code. I guess they don't expect us to do this. I can't help but feel it's a bug in the SDK that in response to [self.tableView endUpdates] it doesn't call estimatedHeightForRowAtIndexPath: for cells that aren't visible, but I'm still trying to find some kind of workaround. Any help is appreciated.

    Read the article

  • What would be a good Database strategy to manage these two product options?

    - by bemused
    I have a site that allows users to purchase "items" (imagine it as an advertisement, or a download). There are 2 ways to purchase: either a subscription, 70 items within 1 month (use them or lose them; at the end of the month your count is 0), or purchase each item individually as you need it. So the user could subscribe and get 70/month, or pay for 10 and use them when they want until the 10 are gone. Maybe it's the late hour, but I can't isolate a solution I like, and I thought some users here would surely have stumbled upon something similar. One analogy I can imagine is web hosts. They sell hosting for monthly fees and also sell counts of things, like "you get 5 free domains with our reseller account". Or take a movie download site: you can subscribe and get 100 movies each month, or pay for a one-time package of 10 movies. So is this a web of tables, and where would be a good cross-reference between the product a user has purchased and how many they have left? products: productID, productType (subscription, consumable, subscription&consumable); subscriptions: SubscriptionID, subscriptionStartDate, subscriptionEndDate; consumables: consumableID, consumableName; UserProducts: userID, productID, productType, consumptionLimit, consumedCount (if it is a subscription, check against the dates; otherwise just check that consumedCount is < the limit). Usually I can lay out my data in a way that I know it will work the way I expect, but this one feels a little questionable to me, like there is a hidden detail that is going to creep up later. That's why I decided to ask for help, in case someone in the vast expanse can enlighten me with their wisdom and experience and clue me in to a satisfying strategy. Thank you.
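
    A sketch of how the tables listed above might collapse into two, with the validity rule written out; the column types are assumptions (MySQL-style):

        CREATE TABLE products (
            productID   INT PRIMARY KEY,
            productType ENUM('subscription', 'consumable', 'subscription_consumable')
        );

        CREATE TABLE user_products (
            userID                INT NOT NULL,
            productID             INT NOT NULL,
            subscriptionStartDate DATE NULL,      -- set only for subscriptions
            subscriptionEndDate   DATE NULL,
            consumptionLimit      INT NOT NULL,   -- e.g. 70 per month, or a one-off 10
            consumedCount         INT NOT NULL DEFAULT 0,
            PRIMARY KEY (userID, productID),
            FOREIGN KEY (productID) REFERENCES products(productID)
        );
        -- a purchase is usable while consumedCount < consumptionLimit and,
        -- for subscriptions, while NOW() falls between the two dates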

    Read the article

  • How do I trust a self signed cert using https?

    - by dave
    Edit: I originally thought the server's certificate was self-signed. It turns out it was signed by a self-signed CA certificate. I'm trying to write a Node.js application that accesses an HTTPS site protected by a certificate signed by a private, self-signed CA certificate. I'd also like to not completely disable certificate checking. I tried putting the server's certificate in the request options, but that doesn't seem to be working. Anyone know how to do this? I expect the following code to print statusCode 200, but instead it prints [Error: SELF_SIGNED_CERT_IN_CHAIN]. I've tried similar code with the request module, with the same results. var https = require('https'); var fs = require('fs'); var opts = { hostname: host, port: 443, path: '/', method: 'GET', ca: fs.readFileSync(serverCertificateFile, 'utf-8') }; var req = https.request(opts, function (res) { console.log('statusCode', res.statusCode); }); req.end(); req.on('error', function (err) { console.error(err); });
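
    A minimal sketch of what usually clears SELF_SIGNED_CERT_IN_CHAIN here: the ca option should hold the CA certificate that signed the server's certificate, not the server certificate itself (caCertificateFile is an assumed path):

        var opts = {
            hostname: host,
            port: 443,
            path: '/',
            method: 'GET',
            // trust the private CA that issued the server's certificate
            ca: [fs.readFileSync(caCertificateFile)]
        };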

    Read the article

  • Detecting modifier keys held down during startup in OS/X (or Windows)?

    - by Tom Swirly
    I've searched here and not found any question that really covers this. I have a cross-platform Windows-OS/X application in which I'd like to be able to detect whether modifier keys like shift or control are being held down while the application starts up. We'd like to do this to allow the application to start up without reading its preferences file, in case that somehow gets corrupted (we saw in testing a prefs bug, now fixed, that made the window size 0 by 0, for example). We're using the excellent and comprehensive cross-platform C++ library named Juce. Unfortunately, Juce's master tells me that he believes this is impossible on OS/X at least since you only get keyboard events and there is no way to read the state of the keys unless something changes. Is this true? Or is there some way around this? I'm almost sure I've used Mac programs that used this mechanism to bypass their preferences. Or... stepping up one level... is there another solution to providing the functionality of "run the program but don't read the prefs file" other than "holding a key down while launching the program"? This is consumer software so we can't expect too much from the user. The final solution will end up being a cross-platform one so hints on the Windows side will also be appreciated. Thanks, and be well! I'll report in with progress on my end.
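
    Since hints on the Windows side are welcome: there the key state can simply be polled once at startup, before any event loop runs. A minimal sketch independent of Juce (on OS X, polling CGEventSourceFlagsState from CoreGraphics may offer the same thing, but that is an assumption here):

        #include <windows.h>

        // true if Shift is physically held down at the moment this is called,
        // e.g. early in WinMain before the preferences file is read
        bool shiftHeldAtLaunch()
        {
            return (GetAsyncKeyState(VK_SHIFT) & 0x8000) != 0;
        }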

    Read the article

  • CSS 100% height with padding/margin

    - by Toji
    This has been driving me crazy for a couple of days now, but in reality it's a problem that I've hit off and on for the last few years: with HTML/CSS, how can I make an element that has a width and/or height that is 100% of its parent element and still has proper padding or margins? By "proper" I mean that if my parent element is 200px tall and I specify 100% height with 5px padding, I would expect to get a 190px high element with a 5px "border" on all sides, nicely centered in the parent element. Now, I know that that's not how the standard box model specifies it should work (although I'd like to know why, exactly...), so the obvious answer doesn't work: #myDiv { width: 100%; height: 100%; padding: 5px; } But it would seem to me that there must be SOME way of reliably producing this effect for a parent of arbitrary size. Does anyone know of a way of accomplishing this (seemingly simple) task? Oh, and for the record I'm not terribly interested in IE compatibility, so that should (hopefully) make things a bit easier. EDIT: Since an example was asked for, here's the simplest one I can think of: <html style="height: 100%"> <body style="height: 100%"> <div style="background-color: black; height: 100%; padding: 25px"></div> </body> </html> The challenge is then to get the black box to show up with a 25 pixel padding on all edges without the page growing big enough to require scrollbars.
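
    A minimal sketch of the usual non-IE answer: box-sizing: border-box makes the padding count inside the declared height instead of adding to it, and zeroing the body's default margin keeps the page from growing past 100%:

        <html style="height: 100%">
          <body style="height: 100%; margin: 0">
            <div style="background-color: black; height: 100%; padding: 25px;
                        box-sizing: border-box;"></div>
          </body>
        </html>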

    Read the article

  • Ideal backup appliance for backup software like Bacula?

    - by Ricket
    I'm at a small company and we (the IT department of two) manage <100 client computers and a handful of servers. Currently we're using a company's appliance to handle backup; it does a small backup every night and a full backup every weekend, and a guy comes on Wednesday to take an offsite backup drive (and gives back last week's drive to swap with it). The backup is done only on the servers' hard drives, because our client computers and employees make sure not to store anything worthwhile on their own computers. So it's a pretty simple situation. Lately this system, mainly the appliance, has been having problems, so we are looking for an alternative. I'm researching other companies but also looking into what we might expect from trying to do this ourselves. There will undoubtedly be a large learning curve, but hey, that's what serverfault is for, right? :) So anyway I was looking at Bacula. Feature list sounds great, documentation is plentiful, but it's only software. So my question is, what is the ideal backup server to run the Bacula server software on? And not only the server but other related appliances. Our current backup appliance uses only hard drives, not tape drives. It has several plugged into it at one time, in hotswap bays on the front of the machine. I couldn't help but notice though, it's hardly more than Windows XP with hard drive bays, a PCI eSATA card (which connects to another appliance extension piece with 2 more bays), and their software. Since the company will take back their appliance if/when we cancel with them, where can I go to configure a server with these kinds of things? And should I consider switching to tape drives? What other concerns should I be thinking about when I pick out hardware for a backup server? Maybe I'm being naive, I'm sure Dell (and any other computer company) sells them in the small business section of their website, but I wanted to make sure that there's not some other more recommended place that other companies are getting their hardware from, and that I don't need anything special for Bacula.

    Read the article

  • JVM process resident set size "equals" max heap size, not current heap size

    - by Volune
    After a few reading about jvm memory (here, here, here, others I forgot...), I am expecting the resident set size of my java process to be roughly equal to the current heap space capacity. That's not what the numbers are saying, it seems to be roughly equal to the max heap space capacity: Resident set size: # echo 0 $(cat /proc/1/smaps | grep Rss | awk '{print $2}' | sed 's#^#+#') | bc 11507912 # ps -C java -O rss | gawk '{ count ++; sum += $2 }; END {count --; print "Number of processes =",count; print "Memory usage per process =",sum/1024/count, "MB"; print "Total memory usage =", sum/1024, "MB" ;};' Number of processes = 1 Memory usage per process = 11237.8 MB Total memory usage = 11237.8 MB Java heap # jmap -heap 1 Attaching to process ID 1, please wait... Debugger attached successfully. Server compiler detected. JVM version is 24.55-b03 using thread-local object allocation. Garbage-First (G1) GC with 18 thread(s) Heap Configuration: MinHeapFreeRatio = 10 MaxHeapFreeRatio = 20 MaxHeapSize = 10737418240 (10240.0MB) NewSize = 1363144 (1.2999954223632812MB) MaxNewSize = 17592186044415 MB OldSize = 5452592 (5.1999969482421875MB) NewRatio = 2 SurvivorRatio = 8 PermSize = 20971520 (20.0MB) MaxPermSize = 85983232 (82.0MB) G1HeapRegionSize = 2097152 (2.0MB) Heap Usage: G1 Heap: regions = 2560 capacity = 5368709120 (5120.0MB) used = 1672045416 (1594.586769104004MB) free = 3696663704 (3525.413230895996MB) 31.144272834062576% used G1 Young Generation: Eden Space: regions = 627 capacity = 3279945728 (3128.0MB) used = 1314914304 (1254.0MB) free = 1965031424 (1874.0MB) 40.089514066496164% used Survivor Space: regions = 49 capacity = 102760448 (98.0MB) used = 102760448 (98.0MB) free = 0 (0.0MB) 100.0% used G1 Old Generation: regions = 147 capacity = 1986002944 (1894.0MB) used = 252273512 (240.5867691040039MB) free = 1733729432 (1653.413230895996MB) 12.702574926293766% used Perm Generation: capacity = 39845888 (38.0MB) used = 38884120 (37.082786560058594MB) free = 961768 (0.9172134399414062MB) 97.58628042120682% used 14654 interned Strings occupying 2188928 bytes. Are my expectations wrong? What should I expect? I need the heap space to be able to grow during spikes (to avoid very slow Full GC), but I would like to have the resident set size as low as possible the rest of the time, to benefit the other processes running on the server. Is there a better way to achieve that? Linux 3.13.0-32-generic x86_64 java version "1.7.0_55" Running in Docker version 1.1.2 Java is running elasticsearch 1.2.0: /usr/bin/java -Xms5g -Xmx10g -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -Xss256k -Djava.awt.headless=true -XX:+UseG1GC -XX:MaxGCPauseMillis=350 -XX:InitiatingHeapOccupancyPercent=45 -XX:+AggressiveOpts -XX:+UseCompressedOops -XX:-OmitStackTraceInFastThrow -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintClassHistogram -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -Xloggc:/opt/elasticsearch/logs/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt elasticsearch/logs/heapdump.hprof -XX:ErrorFile=/opt/elasticsearch/logs/hs_err.log -Des.logger.port=99999 -Des.logger.host=999.999.999.999 -Delasticsearch -Des.foreground=yes -Des.path.home=/opt/elasticsearch -cp :/opt/elasticsearch/lib/elasticsearch-1.2.0.jar:/opt/elasticsearch/lib/*:/opt/elasticsearch/lib/sigar/* org.elasticsearch.bootstrap.Elasticsearch There actually are 5 elasticsearch nodes, each in a different docker container. All have about the same memory usage. 
Some stats about the index: size: 9.71Gi (19.4Gi) docs: 3,925,398 (4,052,694)

    Read the article

  • ssh tunnel error "ssh_exchange_identification: Connection closed by remote host"

    - by Jacob Ewing
    I'm trying to use an ssh tunnel from my office machine to my home machine, and get an error when I try to use it. What I'm doing is starting one shell like so: ssh -gL 12345:my.home.domain:22 my.home.domain This is giving me a proper shell, no problem. What I normally do then is ssh to my home machine through this office machine, like so: ssh -p 12345 127.0.0.1 This has always worked for me, until last week, when I set up a new system on my home machine (switching from Ubuntu to Debian). Now I get an error. I can still open up my initial ssh connection, but when I try to use that tunnel, I get (on the office machine) this error: ssh_exchange_identification: Connection closed by remote host Also, when that happens, the open shell that I have the tunnelling set up through gets this line spat out at it: channel 3: open failed: connect failed: Connection timed out At which point, I'm at a loss. If any more info is needed, I'll be happy to post it. ============= further to that ============== After fiddling around further, I've found that I'm getting a different response from the server (my home machine that is) when I try to telnet in on the various ports. If I try: telnet my.home.domain 22 I get this back: Trying <my ip address>... Connected to <my domain>. Escape character is '^]'. SSH-2.0-OpenSSH_5.5p1 Debian-6+squeeze2 Which is what I would expect. After setting up the tunnel though, and then telnetting to that, I see this response: Trying 127.0.0.1... Connected to 127.0.0.1. Escape character is '^]'. ============== and further still ================== As per kbulgrien's suggestion, here is the output from the client machine with the -v option: ssh -vp 24600 127.0.0.1 OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Connecting to 127.0.0.1 [127.0.0.1] port 24600. debug1: Connection established. debug1: identity file /home/jacob/.ssh/id_rsa type -1 debug1: identity file /home/jacob/.ssh/id_rsa-cert type -1 debug1: identity file /home/jacob/.ssh/id_dsa type -1 debug1: identity file /home/jacob/.ssh/id_dsa-cert type -1 debug1: identity file /home/jacob/.ssh/id_ecdsa type -1 debug1: identity file /home/jacob/.ssh/id_ecdsa-cert type -1 ssh_exchange_identification: Connection closed by remote host

    Read the article

  • OS X: Finder error -36 when using SMB shares on a Samba server bound to AD

    - by Frenchie
    We're looking at deploying SMB homes on Debian (5.0.3) for our mac clients rather than purchasing four new Xserves. We've got our test servers built and functioning properly. Windows clients behave perfectly, but we've run into an issue with OS X (10.6.x and 10.5.x). We're going this route instead of Windows file servers due to a whole bunch of other issues that arise when going that way. Specifically, when mounting a SMB share with unix extensions switched on and the remote server bound to AD, the finder cannot save files on the share, instead touching the file and then bombing out with a -36 IO error, folder creation is fine. Copying files in the terminal behaves fine and the problem seems to be limited to the finder. The issue arises (I think) as the remote UID/GID is passed across when using unix extensions. OS X uses its own winbind idmap (odsam) to work out the effective UID/GID from AD users and groups whilst we're using a rid map on the server. Consequently, there is a mismatch in ownership which the finder chooses to honour. How OS X appears to handle this is to use the remote uid and gid at the file permission level (see below) and then set an OS X acl granting the local uid/gid to have the appropriate permissions on the file. I think the finder touches the file (which the kernel allows because of the ACL) and then checks the filesystem perms and drops out with the IO error. On a Client fc-003353-d:homes2 root# ls -led test/ drwx------+ 2 135978 100513 16384 Feb 3 15:14 test/ 0: user:jfrench allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit 1: group:ARTS\domain users allow 2: group:everyone allow 3: group:owner allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit,only_inherit 4: group:group allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit,only_inherit 5: group:everyone allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit,only_inherit We've tried the following without any luck: Setting the Linux side file owner to match the OS X GID/UID Adding ACLs on the linux filesystem which grant the OS X GID/UID perms Disabling extended attributes Setting steams=no in /etc/nsmb.conf on the client We're currently running a workaround which is to just turn off unix extensions which forces the macs to just mount the share as the local user with u=rwx perms. This works for most things but is causing a few apps that expect certain perms to break in subtle ways. Worst case scenario is that we'll continue running in this way but we would like to have the unix extensions on. Regards. Relevant SMB config below: [global] workgroup = ARTS realm = *snip* security = ADS password server = *snip* unix extensions = yes panic action = /usr/share/panic-action %d idmap backend = rid:ARTS=100000-10000000 idmap uid = 100000-10000000 idmap gid = 100000-10000000 winbind enum users = Yes winbind enum groups = Yes veto files = /lost+found/aquota.*/ hide files = /desktop.ini/$RECYCLE.BIN/.*/AppData/Library/ ea support = yes store dos attributes = yes map system = no map archive = no map readonly = no

    Read the article

  • backup and restoration of a freeipa infrastructure

    - by Sirex
    I'm finding the documentation on IPA server backup and restoration sadly lacking, and being so centrally critical it's not something I'm really happy about shooting in the dark with. Could some kind soul more knowledgeable in the matter please attempt to provide an idiot-proof guide to backing up and restoring IPA server(s)? Particularly the main server (the cert signing one). ...We're looking towards rolling out IPA in a two server setup (1 master, 1 replica). I'm using DNS SRV records to handle failover, hence a loss of the replica isn't a big deal as I could make a new one and force a resync to happen; it's losing the master that bothers me. The thing that I'm really struggling with is locating a step-by-step procedure for backing up and restoring the master server. I'm aware that a whole-VM snapshot is the recommended way of doing IPA server backup, but that isn't an option at this time for us. I'm also aware that FreeIPA 3.2.0 includes some sort of backup command built in, but that isn't in the IPA version in CentOS, and I don't expect it will be for some time yet. I've been trying many different methods, but none of them seem to restore cleanly. Amongst others, I've tried: a command similar to db2ldif.pl -D "cn=directory manager" -w - -n userroot -a /root/userroot.ldif and the script from here to produce three ldif files -- one for the domain ({domain}-userroot), and two for the ipa server (ipa-ipaca and ipa-userroot). Most of the restores I've tried have been similar to the form of: ldif2db.pl -D "cn=directory manager" -w - -n userroot -i userroot.ldif which seems to work and reports no errors, but totally borks the IPA install on the machine, and I can no longer log in with either the admin password on the backed-up server, or the one I set it to on installation before attempting the ldif2db command (I'm installing ipa-server and running ipa-server-install, then attempting the restore). I'm not overly bothered about losing the CA, having to rejoin the domain, losing replication etc. (although it'd be awesome if that could be avoided), but in the instance of the main server dropping I'd really like to avoid having to re-enter all the user/group information. I guess in the instance of losing the main server I could promote the other one and replicate in the other direction, but I've not tried that, either. Has anyone done that? tl;dr: Can someone provide an idiot's guide to backing up and restoring an IPA server (preferably on CentOS 6) in a clear enough way that'd make me feel confident it'll actually work when the dreaded time comes? Crayons are optional, but appreciated ;-) I can't be the only person struggling with this, seeing how widely used IPA is, surely?

    Read the article

  • An alternative to Google Talk, AIM, MSN, et al [closed]

    - by mkaito
    I'm not entirely sure whether this part of stack exchange is the most adequate for my question, but it would seem to me that people sharing this kind of concern would converge either here, or possibly on a more unix-specific sub site. Either way, here goes. Background Feel free to skip to The Question, below. This should, however, help those interested understand where I'm coming from, and where I expect to get, messaging-wise. My online talking place-to-go has been IRC for the last fifteen years. I think it's a great protocol, and clients out there are very good. I still use, and will always continue to use IRC for most of my chat needs. But then, there is private instant messaging. While IRC can solve this with queries and DCC chats, the protocol just isn't meant to work too well on intermittent connections, such as a mobile device, where you can often walk around places with low signal. I used MSN for a while, but didn't like it. The concept was awesome, but I think Microsoft didn't get the implementation quite right. When they started adding all that eye candy, and my buddies started flooding me with custom icons and buzzing my screen to it's knees, I shut my account and told folks that missed me to just email or call me. Much whining happened, I got called many weird things for not using MSN, but folks eventually got over it. Next, Google Talk came along, and seemed to be a lot better than MSN ever was. The protocol was open, so I could use whatever client I felt a fancy for. With the advent of smart phones, I just got myself a gtalk client on the phone, and have had a really decent integrated mostly-universal IM solution. Over the last few months, all Google services have been feeling flaky. IMs will often arrive anywhere between twenty minutes and one hour after being sent, clients will randomly disconnect, client priorities seem to work sometimes, and sometimes just a random device of those connected will get an IM. I think the time has come to look for greener grass. The Question It's rather hard to put what I'm looking for into precise words. I guess I just want something that is kind of like MSN/Gtalk, but that doesn't let me down when I need it. IRC is pretty much perfect, but the protocol just isn't designed to work well on mobile devices. Really, at this point I'm considering sticking to IRC for desktop messaging, and SMS/email on the phone, but I hope that in this day and age there is something better out there.

    Read the article

  • Ubuntu 12.04: apt-get "failed to fetch"; apt is trying to fetch via old static IP

    - by gabe
    Sample error: W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/precise-security/universe/i18n/Translation-en Unable to connect to 192.168.1.70:8118: Now this was working just fine until I changed the IP this morning. I have the server set to a static IP of 10.0.1.70, and for years it has been 192.168.1.70 - the IP apt-get is trying to use right now. I use Privoxy and Tor, thus the 8118 port. Like I said, it all worked until I changed the static IP from 192.168.1.70 to 10.0.1.70. I was forced to do so because of router issues. (Long and involved story; I didn't really want to change the IP because I knew something like this would happen.) The setup for Tor/Privoxy requires that you point Privoxy at Tor via 127.0.0.1:9050, then point curl, etc., to Privoxy via $HOME/.bashrc. Typically you would set the listen IP for Privoxy to 127.0.0.1, but if you want it accessible to the rest of the LAN you set it to the server's LAN IP. Which I did a long time ago, and it was working fine until this morning. I have changed all instances of 192.168.1.70 to 10.0.1.70 in both /etc/privoxy/config and $HOME/.bashrc. What makes this really strange for me is that curl is working fine. I curl icanhazip.com and voila, I get a new IP every 10 minutes or so. I curl CNN.com and I get the short but sweet "permanently moved to www.cnn.com" message I expect. Firefox works fine. Ping works fine. And I've tested all of this via Remote Desktop over my LAN. So the connection appears to be fine for everything except apt. I've also rebooted, hoping that would clear 192.168.1.70 from apt. So the connection to the internet and DNS aren't an issue for these programs, and they are, as far as I can tell, using Privoxy/Tor just fine. The real irony here is that I've tried to open up Privoxy to go to Ubuntu's servers directly without going through Tor, to speed up the downloads from Ubuntu (did this months ago). So somewhere that I have not been able to find, apt has stored the IP 192.168.1.70. And 192.168.1.70 is no longer valid. Thanks for the help
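
    A hedged guess at where the old address is hiding: apt never reads ~/.bashrc and keeps its own proxy setting. Something like the following should locate and correct it (the 01proxy file name is only an assumption; edit whichever file the grep turns up):

        # find the stale proxy entry in apt's own configuration
        grep -r "192.168.1.70" /etc/apt/

        # typical line to correct, e.g. in /etc/apt/apt.conf or /etc/apt/apt.conf.d/01proxy
        Acquire::http::Proxy "http://10.0.1.70:8118/";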

    Read the article

  • Persistent Issues on small business network using Cisco 871W and Catalyst Express 500

    - by Ben Campbell
    Being the most qualified (read: still not qualified) to solve our persistent network issues, I've turned to serverfault for guidance. I've done some searching, read related documentation on cisco.com, and tried a bit of troubleshooting. Here is the config: a 100Mb synchronous connection from a business internet provider (tested multiple times at 100Mb at the source). A Cisco 871W wireless access point & router is where the WAN connection starts (this serves all our wireless); the only wired connection on the 871W is the Catalyst switch listed below. A Cisco Catalyst Express 500 (24TT) is where all the wired connections terminate. About 20 Windows workstations and servers (AD/web servers only). Some services in EC2, including mail and other web servers/apps. I've been TOLD the internal cabling should be gigabit-ready. Here are the problems: generally slow download rates from the internet to the desktop/laptop; frequent "page cannot be displayed" errors in browsers (sometimes 3 or 4 reloads are necessary, and oftentimes CSS or other content requiring the browser to connect to a different server won't load); and slow speed within the LAN when copying files from workstation to workstation. I would expect extremely fast data transfer workstation to workstation / server to workstation in this simple network. Several things I need to admit: I'm not primarily a network guy. Funding is relatively low, so I need to be the guy that finds the solution. I understand most of the terminology and most of the technology; implementation is where I fail due to lack of experience. Getting to the point: I'm wondering whether experienced network admins think that our small network should be sufficiently served with our current hardware if configured properly, or whether we should purchase new equipment and start fresh? If starting fresh is the plan, whatever that new equipment may be is likely a different question entirely. If I haven't provided enough information, I will happily do some troubleshooting and update with the results. I have experience using Wireshark and some other tools. Please let me know what you think would be most helpful, and thanks in advance. EDIT: I forgot to add that the Cisco appliance will not finish loading the SDM Express console. It hangs every time at "populating modules... DHCP", and eventually crashes and closes. I've rebooted the hardware and this still happens.

    Read the article

  • pfSense 2.1 OpenVPN client not using tunnelled interface

    - by Brian M. Hunt
    I'm having some trouble getting OpenVPN working on my pfSense box. The issue is quite strange to me. When I have the OpenVPN turned on, only my router is able to connect to the Internet. From the router I can use ping, links, etc., and connections work exactly as expected - through the VPN, with the IP address assigned by my VPN provider (Proxy.sh, incidentally). However, none of the clients on the local network can connect to the Internet. I get timeouts when using ping or a web browser. I can ping my router, and the IP address of the gateway. When I switch the default gateway from the VPN to my ISP's gateway, all works exactly as expected. Here the routing table (netstat -r) when in VPN mode, and a key for it: IPv4 Destination Gateway Flags Refs Use Mtu Netif Expire 0.0.0.0/1 10.XX.X.53 UGS 0 122 1500 ovpnc1 = default 10.XX.X.53 UGS 0 235 1500 ovpnc1 8.8.8.8 10.XX.X.53 UGHS 0 82 1500 ovpnc1 10.XX.X.1/32 10.11.0.53 UGS 0 0 1500 ovpnc1 10.XX.X.53 link#12 UH 0 0 1500 ovpnc1 10.XX.X.54 link#12 UHS 0 0 16384 lo0 ZZ.XX.XXX.0/20 link#1 U 0 83 1500 re0 ZZ.XX.XXX.XXX link#1 UHS 0 0 16384 lo0 127.0.0.1 link#9 UH 0 12 16384 lo0 128.0.0.0/1 10.11.0.53 UGS 0 123 1500 ovpnc1 192.168.1.0/24 link#11 U 0 1434 1500 ue0 192.168.1.1 link#11 UHS 0 0 16384 lo0 YYY.YYY.YYY.YYY/32 ZZ.XX.XXX.1 UGS 0 249 1500 re0 IP addresses 10.XX.X.53/54 - My DHCP-assigned IP address/pair from the VPN provider ZZ.XX.XXX.XXX - My external IP assigned by my ISP YYY.YYY.YYY.YYY - The external IP assigned by the VPN provider Interfaces ovpnc1 - My VPN client interface re0 - My LAN interface ue0 - My WAN interface This looks essentially what I would expect it to be. The default route is through the VPN provider. The VPN address is routed through the ISP-assigned IP address. I am not sure what would be wrong here. So figuring this was a firewall issue, I basically tried enabling all in/out traffic. This did not seem to remedy the problem. Also figuring it could possibly be some client networking issue, I restarted the clients on the LAN. This did not help. I also ran route flush and reset the routes manually. So I am a bit stumped, and would be very grateful for any thoughts on what the problem might be.

    Read the article

  • Sendmail relay authentication

    - by Pawel Veselov
    I'm trying to set up my sendmail to authenticate against a relay (comcast). I'm not seeing any attempts to authenticate at all. I'm trying to just debug how authentication works, and can't connect all the pieces... I have, in my .mc file: define(`RELAY_MAILER_ARGS', `TCP $h 587')dnl define(`SMART_HOST', `relay:smtp.comcast.net.')dnl define(`confAUTH_MECHANISMS', `PLAIN')dnl FEATURE(`authinfo',`hash /etc/mail/client-info')dnl And in my /etc/mail/client-info: AuthInfo:*.comcast.net "U:root" "I:comcast_user" "P:comcast_password" Now, I know everything is fine with the u/p, as I could authenticate directly through SMTP, using telnet. There are two things I don't understand. When AuthInfo records are searched for, they are matched by the target hostname. How? Does it it use the map key (something I would expect), or uses the so-called "Domain" ("R:" parameter that I don't set in my auth-info line) What is "U:", really? Sendmail README (http://www.sendmail.org/m4/smtp_auth.html) says it's "user(authoraztion id)", and "I:" is "authentication ID". That suggests that my username should be in "U:", actually, but http://www.sendmail.org/~ca/email/auth.html says that "I:" is your remote user name. The session looks like this: [root@manticore]/etc/mail# sendmail -qf -v Warning: Option: AuthMechanisms requires SASL support (-DSASL) Running /var/spool/mqueue/p97CgcWq023273 (sequence 1 of 399) [email protected]... Connecting to smtp.comcast.net. port 587 via relay... 220 omta19.westchester.pa.mail.comcast.net comcast ESMTP server ready >>> EHLO my.host.name 250-omta19.westchester.pa.mail.comcast.net hello [my.ip.add.res], pleased to meet you 250-HELP 250-AUTH LOGIN PLAIN 250-SIZE 15728640 250-ENHANCEDSTATUSCODES 250-8BITMIME 250-STARTTLS 250 OK >>> STARTTLS 220 2.0.0 Ready to start TLS >>> EHLO my.host.name 250-omta19.westchester.pa.mail.comcast.net hello [my.ip.add.res], pleased to meet you 250-HELP 250-AUTH LOGIN PLAIN 250-SIZE 15728640 250-ENHANCEDSTATUSCODES 250-8BITMIME 250 OK >>> MAIL From:<> SIZE=2183 550 5.1.0 Authentication required MAILER-DAEMON... aliased to postmaster postmaster... aliased to root root... aliased to [email protected] postmaster... aliased to root root... aliased to [email protected] >>> RSET 250 2.0.0 OK [root@manticore]/etc/mail# sendmail -d0.1 Version 8.14.3 Compiled with: DNSMAP LOG MAP_REGEX MATCHGECOS MILTER MIME7TO8 MIME8TO7 NAMED_BIND NETINET NETINET6 NETUNIX NEWDB NIS PIPELINING SCANF SOCKETMAP STARTTLS TCPWRAPPERS USERDB XDEBUG Thanks, Pawel.
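
    For what it's worth, the session's "Warning: Option: AuthMechanisms requires SASL support (-DSASL)" suggests this sendmail was built without SASL, in which case it will never attempt AUTH regardless of the authinfo map. Assuming a SASL-enabled build, a sketch of the shape that usually works (the hash map is keyed on the exact smart-host name, and must be rebuilt after editing):

        # /etc/mail/client-info -- key on the exact relay host, not a wildcard
        AuthInfo:smtp.comcast.net "U:comcast_user" "I:comcast_user" "P:comcast_password" "M:PLAIN"

        # rebuild the map, then restart sendmail
        makemap hash /etc/mail/client-info < /etc/mail/client-info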

    Read the article

  • Logrotate not doing any rotation

    - by blizz
    I just set up LogRotate on my RHEL6 server so that it rotates my custom Apache log files. However, it doesn't do anything when i try manually running it. I expect it to rotate the log files "access.log" and "err.log". They have been there for a few days and need to be rotated. Here is the output: [root@pc1 httpd]# logrotate -d -f /etc/logrotate.d/apache reading config file /etc/logrotate.d/apache reading config info for /var/log/httpd/*log /var/www/html/NSLogs/access.log /var/www/html/NSErrorLogs/err.log Handling 1 logs rotating pattern: /var/log/httpd/*log /var/www/html/NSLogs/access.log /var/www/html/NSErrorLogs/err.log forced from command line (no old logs will be kept) empty log files are rotated, old logs are removed considering log /var/log/httpd/access_log log needs rotating considering log /var/log/httpd/error_log log needs rotating considering log /var/www/html/NSLogs/access.log log needs rotating considering log /var/www/html/NSErrorLogs/err.log log needs rotating rotating log /var/log/httpd/access_log, log->rotateCount is 0 dateext suffix '-20131023' glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]' glob finding old rotated logs failed fscreate context set to unconfined_u:object_r:httpd_log_t:s0 renaming /var/log/httpd/access_log to /var/log/httpd/access_log-20131023 disposeName will be /var/log/httpd/access_log-20131023.gz running postrotate script running script with arg /var/log/httpd/access_log: " /usr/bin/killall -HUP httpd " compressing log with: /bin/gzip removing old log /var/log/httpd/access_log-20131023.gz error: error opening /var/log/httpd/access_log-20131023.gz: No such file or directory rotating log /var/log/httpd/error_log, log->rotateCount is 0 dateext suffix '-20131023' glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]' glob finding old rotated logs failed fscreate context set to unconfined_u:object_r:httpd_log_t:s0 renaming /var/log/httpd/error_log to /var/log/httpd/error_log-20131023 disposeName will be /var/log/httpd/error_log-20131023.gz running postrotate script running script with arg /var/log/httpd/error_log: " /usr/bin/killall -HUP httpd " compressing log with: /bin/gzip removing old log /var/log/httpd/error_log-20131023.gz error: error opening /var/log/httpd/error_log-20131023.gz: No such file or directory rotating log /var/www/html/NSLogs/access.log, log->rotateCount is 0 dateext suffix '-20131023' glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]' glob finding old rotated logs failed fscreate context set to unconfined_u:object_r:httpd_sys_rw_content_t:s0 renaming /var/www/html/NSLogs/access.log to /var/www/html/NSLogs/access.log-20131023 disposeName will be /var/www/html/NSLogs/access.log-20131023.gz running postrotate script running script with arg /var/www/html/NSLogs/access.log: " /usr/bin/killall -HUP httpd " compressing log with: /bin/gzip removing old log /var/www/html/NSLogs/access.log-20131023.gz error: error opening /var/www/html/NSLogs/access.log-20131023.gz: No such file or directory rotating log /var/www/html/NSErrorLogs/err.log, log->rotateCount is 0 dateext suffix '-20131023' glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]' glob finding old rotated logs failed fscreate context set to unconfined_u:object_r:httpd_sys_rw_content_t:s0 renaming /var/www/html/NSErrorLogs/err.log to /var/www/html/NSErrorLogs/err.log-20131023 disposeName will be /var/www/html/NSErrorLogs/err.log-20131023.gz running postrotate script running script with arg /var/www/html/NSErrorLogs/err.log: " /usr/bin/killall -HUP httpd " 
compressing log with: /bin/gzip removing old log /var/www/html/NSErrorLogs/err.log-20131023.gz error: error opening /var/www/html/NSErrorLogs/err.log-20131023.gz: No such file or directory
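
    One caveat worth noting: -d puts logrotate into debug mode, which is a dry run, so nothing is actually renamed or compressed and the "error opening ...gz: No such file or directory" lines are expected there. A forced real run would look like this (paths as in the output above):

        # actually rotate; -f forces rotation even if it is not yet due
        logrotate -f /etc/logrotate.d/apache

        # unforced runs consult the state file to decide what is due
        cat /var/lib/logrotate.status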

    Read the article

  • Why do I see a large performance hit with DRBD?

    - by BHS
    I see a much larger performance hit with DRBD than their user manual says I should get. I'm using DRBD 8.3.7 (Fedora 13 RPMs). I've setup a DRBD test and measured throughput of disk and network without DRBD: dd if=/dev/zero of=/data.tmp bs=512M count=1 oflag=direct 536870912 bytes (537 MB) copied, 4.62985 s, 116 MB/s / is a logical volume on the disk I'm testing with, mounted without DRBD iperf: [ 4] 0.0-10.0 sec 1.10 GBytes 941 Mbits/sec According to Throughput overhead expectations, the bottleneck would be whichever is slower, the network or the disk and DRBD should have an overhead of 3%. In my case network and I/O seem to be pretty evenly matched. It sounds like I should be able to get around 100 MB/s. So, with the raw drbd device, I get dd if=/dev/zero of=/dev/drbd2 bs=512M count=1 oflag=direct 536870912 bytes (537 MB) copied, 6.61362 s, 81.2 MB/s which is slower than I would expect. Then, once I format the device with ext4, I get dd if=/dev/zero of=/mnt/data.tmp bs=512M count=1 oflag=direct 536870912 bytes (537 MB) copied, 9.60918 s, 55.9 MB/s This doesn't seem right. There must be some other factor playing into this that I'm not aware of. global_common.conf global { usage-count yes; } common { protocol C; } syncer { al-extents 1801; rate 33M; } data_mirror.res resource data_mirror { device /dev/drbd1; disk /dev/sdb1; meta-disk internal; on cluster1 { address 192.168.33.10:7789; } on cluster2 { address 192.168.33.12:7789; } } For the hardware I have two identical machines: 6 GB RAM Quad core AMD Phenom 3.2Ghz Motherboard SATA controller 7200 RPM 64MB cache 1TB WD drive The network is 1Gb connected via a switch. I know that a direct connection is recommended, but could it make this much of a difference? Edited I just tried monitoring the bandwidth used to try to see what's happening. I used ibmonitor and measured average bandwidth while I ran the dd test 10 times. I got: avg ~450Mbits writing to ext4 avg ~800Mbits writing to raw device It looks like with ext4, drbd is using about half the bandwidth it uses with the raw device so there's a bottleneck that is not the network.

    Read the article

  • Effect of HOME on libreoffice to convert to pdf as non-root user

    - by user1032531
    I installed libreoffice-headless and can convert documents when logged on as root. I then tried doing so as another user, and it didn't show an error, but didn't convert the file. I then found that if I get rid of the HOME=/tmp/ayb, it works with the other user. Doesn't HOME=/tmp/ayb just allow files to default to this directory if not specified? (Sorry, I tried to search "Linux HOME", but as you probably expect, received a bunch of non-relevant results). If not, what is the purpose of specifying HOME? Why does setting HOME prevent it from converting for non-root users? Note that /tmp and /tmp/ayb are both 0777. Thank you [root@desktop ~]# yum install libreoffice-headless [root@desktop ~]# yum install libreoffice-writer [root@desktop ~]# ls -l total 48 -rwxrwxrwx. 1 NotionCommotion NotionCommotion 48128 Jul 30 02:38 document_34.doc [root@desktop ~]# HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc convert /tmp/ayb/document_34.doc -> /tmp/ayb/document_34.pdf using writer_pdf_Export [root@desktop ~]# rm d*.pdf rm: remove regular file `document_34.pdf'? y [root@desktop ~]# /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc convert /tmp/ayb/document_34.doc -> /tmp/ayb/document_34.pdf using writer_pdf_Export [root@desktop ~]# rm d*.pdf rm: remove regular file `document_34.pdf'? y [root@desktop ~]# su NotionCommotion sh-4.1$ HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc sh-4.1$ rm d*.pdf rm: cannot remove `d*.pdf': No such file or directory sh-4.1$ /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc sh-4.1$ rm d*.pdf rm: cannot remove `d*.pdf': No such file or directory sh-4.1$ exit exit [root@desktop ~]# su NotionCommotion sh-4.1$ /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc convert /tmp/ayb/document_34.doc -> /tmp/ayb/document_34.pdf using writer_pdf_Export sh-4.1$ rm d*.pdf sh-4.1$ HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc sh-4.1$ rm d*.pdf rm: cannot remove `d*.pdf': No such file or directory sh-4.1$ /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc sh-4.1$ rm d*.pdf rm: cannot remove `d*.pdf': No such file or directory sh-4.1$
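
    A hedged sketch of a workaround: headless conversion needs a writable LibreOffice user profile, which normally lives under $HOME, so giving each user an explicit profile directory sidesteps HOME entirely (the profile path here is an assumption):

        /usr/bin/libreoffice --headless \
            -env:UserInstallation=file:///tmp/lo_profile_$USER \
            --convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc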

    Read the article

  • Can't get Monit to work

    - by Andrea
    I am trying to configure Monit on my local machine to get a taste at how it works, but I have some issues. What I am trying to do is to get any evidence that Monit is up and running correctly and is actually monitoring something. So my /etc/monit/monitrc looks like set daemon 60 set logfile /var/log/monit.log set idfile /var/lib/monit/id set statefile /var/lib/monit/state set eventqueue basedir /var/lib/monit/events slots 100 set httpd port 2812 and allow username:password check process apache2 with pidfile /usr/local/apache/logs/apache2.pid start program = "/etc/init.d/apache2 start" stop program = "/etc/init.d/apache2 stop" if failed port 6543 protocol http then exec "/usr/bin/touch /tmp/monit" If I understand correctly, since apache does not listen on port 6543 (it is just a random number) I should get an error, and as a consequence the file /tmp/monit should be created. So I start monit by sudo service monit start sudo monit monitor apache2 Unfortunately no such file is created. Instead the web console shows an error for apache - execution failed. The log says 'apache2' failed to start. What am I doing wrong? EDIT As suggested in the comments, I ran monit in verbose mode, by monit -vv monitor apache2 (the exact command suggested in the comments failed). The output is Runtime constants: Control file = /etc/monit/monitrc Log file = /var/log/monit.log Pid file = /var/run/monit.pid Debug = True Log = True Use syslog = False Is Daemon = True Use process engine = True Poll time = 60 seconds with start delay 0 seconds Expect buffer = 256 bytes Event queue = base directory /var/lib/monit/events with 100 slots Mail from = (not defined) Mail subject = (not defined) Mail message = (not defined) Start monit httpd = True httpd bind address = Any/All httpd portnumber = 2812 httpd signature = True Use ssl encryption = False httpd auth. style = Basic Authentication The service list contains the following entries: Process Name = apache2 Pid file = /usr/local/apache/logs/apache2.pid Monitoring mode = active Start program = '/etc/init.d/apache2 start' timeout 30 second(s) Stop program = '/etc/init.d/apache2 stop' timeout 30 second(s) Existence = if does not exist 1 times within 1 cycle(s) then restart else if succeeded 1 times within 1 cycle(s) then alert Pid = if changed 1 times within 1 cycle(s) then alert Ppid = if changed 1 times within 1 cycle(s) then alert Port = if failed localhost:6543 [HTTP via TCP] with timeout 5 seconds 1 times within 1 cycle(s) then exec '/usr/bin/touch /tmp/prova-monit' timeout 0 cycle(s) else if succeeded 1 times within 1 cycle(s) then alert System Name = system_andrea-Vostro-420-Series Monitoring mode = active
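
    A sketch that at least rules out the pidfile as the culprit, assuming a stock Debian/Ubuntu apache2 package: if Monit finds no process behind the configured pidfile, it reports the start as failed even when Apache itself is running fine.

        # pidfile path is an assumption -- check APACHE_PID_FILE in /etc/apache2/envvars
        check process apache2 with pidfile /var/run/apache2.pid
          start program = "/etc/init.d/apache2 start"
          stop program = "/etc/init.d/apache2 stop"
          if failed port 6543 protocol http then exec "/usr/bin/touch /tmp/monit"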

    Read the article

  • How to install 32-bit libraries using Debian Testing

    - by bgoodr
    Question: What is the way to determine, ahead of time and without doing a full install of 64-bit Debian Testing NETINST, when Debian Testing has 32-bit libraries available and fully working and installable so that the following command works without broken package errors?: apt-get install ia32-libs ia32-libs-gtk The errors that occur when 32-bit libraries are not available, still in some broken state, or whatever is broken are detailed below. I already have concluded that "Just install Stable" is my stop-gap measure for now, but I would like to know the answer to the above question so as to avoid a lengthy installation process only to run into these problems at the very end. Details: I downloaded the 64-bit Debian Testing netinst a couple of days ago. This was "Jessie" built 20131014-06:07 via http://tinyurl.com/lejpa. This is weekly testing build. Yes, I know I should expect problems, and I did. I managed to get it completely installed and was able to invoke into GNOME, but not get past the 32-bit library problem. The problems starts when I attempt to install the 32-bit libraries via: apt-get install ia32-libs ia32-libs-gtk that returns: root@breath:~# apt-get install ia32-libs ia32-libs-gtk Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: ia32-libs : Depends: ia32-libs-i386 but it is not installable ia32-libs-gtk : Depends: ia32-libs-i386 but it is not installable Depends: ia32-libs-gtk-i386 but it is not installable E: Unable to correct problems, you have held broken packages. I then found an old (2012 is old to me) answer at ia32-libs : Depends: ia32-libs-i386 but it is not installable and even tried what they suggested there which was dpkg --add-architecture i386 apt-get update After executing the above, I tried again but got: root@breath:~# apt-get install ia32-libs ia32-libs-gtk Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: ia32-libs : Depends: ia32-libs-i386 ia32-libs-gtk : Depends: ia32-libs-i386 E: Unable to correct problems, you have held broken packages. root@breath:~# And then tried this: root@breath:~# dpkg --get-selections | grep hold And that returned nothing. Not only is there broken packages, the system doesn't even know what packages are broken, so Debian Stable is my only solution I know of right now. Hence my question above.
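
    For what it's worth, a sketch of the multiarch route, on the assumption that jessie has dropped the ia32-libs/ia32-libs-gtk metapackages in favour of per-package 32-bit installs; with the i386 architecture already added as above, the specific :i386 libraries an application needs can be installed directly:

        dpkg --add-architecture i386
        apt-get update
        # install only the 32-bit libraries actually required, e.g.
        apt-get install libc6:i386 libstdc++6:i386 zlib1g:i386 libgtk2.0-0:i386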

    Read the article
