Search Results

Search found 4705 results on 189 pages for 'export to csv'.


  • Changing the prompt in telnet

    - by wim
    With some help from people on here, I was able to set a custom prompt in an ssh session (thanks!). Now I need to do the same in telnet, but I'm not sure of what syntax I could use for that. Basically the telnet prompt is just a > character, I need to modify it to something I can more reliably detect in automation jobs. Hope this makes sense. From inside telnet, trying to escape that command with a bang like !PS1=spam and !PS2=eggs did not change it. wim@wim-acer:~$ ssh [email protected] -i ~/.ssh/guest_nopassphrase -t "export PS1='Sending a custom prompt \w \$ '; exec sh" Sending a custom prompt ~ $ set HOME='/var/tmp' IFS=' ' LOGNAME='guest' PATH='/sbin:/usr/sbin:/bin:/usr/bin' PPID='1128' PS1='Sending a custom prompt \w $ ' PS2='> ' PS4='+ ' PWD='' SHELL='/bin/sh' TERM='xterm' USER='guest' Sending a custom prompt ~ $ telnet localhost <snip> Entering character mode Escape character is '^]'. > !set CONSOLE='/dev/ttyp0' HOME='/var/tmp' IFS=' ' LOGNAME='root' PATH='/sbin:/bin:/usr/sbin:/usr/bin' PPID='546' PREVLEVEL='N' PS1='\w \$ ' PS2='> ' PS4='+ ' PWD='/var/tmp' RESPAWN_COUNT='1' RESPAWN_LAST='0' RESPAWN_MAX='5' RESPAWN_TIME='5' ROOTDEV='/dev/sla1' RUNLEVEL='5' SHELL='/bin/false' TERM='linux' USER='root' > > Connection closed by foreign host Sending a custom prompt ~ $ Connection to 192.168.1.124 closed. wim@wim-acer:~$
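
    For illustration (this is not from the original session): once the remote sh accepts a plain PS1 assignment, an automation script can push its own marker prompt and wait for it. A minimal sketch using Python's standard telnetlib (available up to Python 3.12); the host, the stock "> " prompt and the AUTOPROMPT marker are assumptions, not taken from the transcript above.

        import telnetlib  # stdlib module, removed in Python 3.13

        HOST = "192.168.1.124"   # assumed: same target as the ssh example
        MARKER = "AUTOPROMPT> "  # any string unlikely to appear in command output

        tn = telnetlib.Telnet(HOST, 23, timeout=10)
        tn.read_until(b"> ", timeout=10)             # wait for the stock sh prompt
        tn.write(("PS1='%s'\n" % MARKER).encode())   # plain assignment is enough for the current shell
        tn.read_until(MARKER.encode(), timeout=10)
        tn.write(b"uname -a\n")
        print(tn.read_until(MARKER.encode(), timeout=10).decode(errors="replace"))
        tn.close()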

    Read the article

  • NFS on top of GFS2 - does it work?

    - by Matthew
    We're currently using a NoSQL derivative called Splunk to receive our data. The software supports something called "search head pooling", in which the job-dispatching engine is housed on several servers that share a common storage point. Originally our intention was to use a clustered filesystem like GFS2 because of low latency, stability, and ease of setup. We set up GFS2, and it's working with no issues. However, when we try to run the software, it attempts to create lock files and does a bunch of other things that their support team can't quite explain. The ultimate feedback from them was that they only support NFS. Our network administration team heavily frowns on NFS (lack of stability, file-lock issues, etc). So I was thinking about the possibility of setting up NFS on each server in the cluster to act as a wedge layer between the GFS2 filesystem and the software: basically configure each server to export the GFS2 filesystem's mountpoint via NFS, and then tell each server to connect to that NFS share. That way we aren't introducing a single point of failure should a dedicated NFS server go down, but the vendor gets their "required" NFS share. I'm just brainstorming ways around this, so please tear it apart :)
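
    Not part of the question, but since the vendor's sticking point is lock files: before committing to the loopback-NFS wedge it may be worth probing whether POSIX locking actually behaves across the re-exported mount. A rough probe in Python, assuming the NFS re-export of the GFS2 volume is mounted at /mnt/splunk-shared (a hypothetical path):

        import fcntl, os

        MOUNT = "/mnt/splunk-shared"              # hypothetical NFS mount of the GFS2 volume
        path = os.path.join(MOUNT, "lock-probe")

        f = open(path, "w")
        try:
            # non-blocking exclusive POSIX lock; over NFS this exercises lockd/NLM
            fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            print("got exclusive lock on", path)
            input("run this script on a second node now, then press Enter...")
        finally:
            fcntl.lockf(f, fcntl.LOCK_UN)
            f.close()

    If the second node also reports an exclusive lock while the first still holds it, the wedge layer will not give the software the locking semantics it expects.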

    Read the article

  • Compare-Object gives false differences

    - by Andy
    I have a problem with Compare-Object. My task is to get the difference between two directory snapshots made at different times. The first snapshot is taken like this: ls -recurse d:\dir | export-clixml dir-20100129.xml Then, later, I take the second snapshot and load both of them: $b = (import-clixml dir-20100130.xml) $a = (import-clixml dir-20100129.xml) Next, I try to compare with Compare-Object, like this: diff $a $b What I get is, in some places, files that were added to $b since $a; in other places, files that were present in both snapshots; and some files that really were added to $b do not appear in the Compare-Object output at all. Puzzling, but $b.count - $a.count is EXACTLY the same as (diff $a $b).count. Why is that? OK, Compare-Object has a -Property parameter. I try to use that: diff -property fullname $a $b And I get a whole mess of differences: it shows me ALL the files. For example, say $a contains: A\1.txt A\2.txt A\3.txt And $b contains: X\2.mp3 X\3.mp3 X\4.mp3 A\1.txt A\2.txt A\3.txt The diff output is something like this: X\2.mp3 => A\1.txt <= X\3.mp3 => A\2.txt <= X\4.mp3 => A\3.txt <= A\1.txt => A\2.txt => A\3.txt => Weird. I think I don't understand something crucial about Compare-Object usage, and manuals are scarce... Please help me get the DIFFERENCE between two directory snapshots. Thanks in advance. UPDATE: I've saved the data as plain strings like this: > import-clixml dir-20100129.xml | % { $_.fullname } | out-file -enc utf8 a.txt And the results are the same. Here are excerpts of both snapshots (the top 100-something lines, a.txt and b.txt), the output of Compare-Object, and the output of UNIX diff (unified). All files are UTF-8: http://dl.dropbox.com/u/2873752/compare-object-problem.zip
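
    For comparison (not part of the question): the underlying task, diffing two flat lists of paths, is easy to sanity-check outside PowerShell. A small sketch in Python against the a.txt / b.txt dumps mentioned in the UPDATE:

        # Diff two directory snapshots saved as one full path per line (UTF-8).
        def load(path):
            # utf-8-sig also copes with the BOM that out-file -enc utf8 tends to write
            with open(path, encoding="utf-8-sig") as fh:
                return {line.strip() for line in fh if line.strip()}

        old, new = load("a.txt"), load("b.txt")
        for label, paths in (("added", new - old), ("removed", old - new)):
            print(label + ":")
            for p in sorted(paths):
                print("   " + p)

    Whatever this prints is the ground truth the Compare-Object call should be reproducing.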

    Read the article

  • Access an external SSH server through a restrictive proxy [on hold]

    - by Cyrille
    I'm a software developer. I wish to access my computer at home through SSH. For example, I sometimes need to access my personal projects' source code to check how I handled specific problems. Unfortunately, I currently work behind an over-restrictive and anti-productive proxy that wastes a hell of a lot of everyone's time (we often have to visit websites from our smartphones or use a web proxy to check very legitimate websites for answers, and don't get me started on the other "security" overkill features we have to cope with...). Well, back to the subject: I can access my home computer from my phone (SSH; ports 22 and 80 are both redirected by the router to port 22). It works, but it's quite uncomfortable. From my office computer, this is what I have tried so far: export http_proxy=http://user:pass@proxyip:8080 echo "user:pass" > ~/.corkscrew-auth echo "ProxyCommand corkscrew proxyip 8080 %h %p /home/me/.corkscrew-auth" > ~/.ssh/config ssh 82.23.34.56 -l me -p 80 Proxy could not open connnection to 82.23.34.56: Forbidden ssh_exchange_identification: Connection closed by remote host (same without -p 80) Without corkscrew: ssh: connect to host 82.23.34.56 port 80: Connection timed out ssh: connect to host 82.23.34.56 port 22: Connection timed out Any other ideas?
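
    For anyone wondering what corkscrew is actually doing (illustration, not from the post): it issues an HTTP CONNECT request to the proxy and, if the proxy allows it, shuttles bytes over the resulting tunnel. A bare-bones probe in Python, with the proxy address, credentials and target as placeholders, shows whether the proxy permits CONNECT to the home port at all:

        import base64, socket

        PROXY = ("proxyip", 8080)             # placeholder proxy host and port
        TARGET = ("82.23.34.56", 80)          # home router, as in the post
        AUTH = base64.b64encode(b"user:pass").decode()

        s = socket.create_connection(PROXY, timeout=10)
        req = ("CONNECT %s:%d HTTP/1.1\r\n"
               "Host: %s:%d\r\n"
               "Proxy-Authorization: Basic %s\r\n"
               "\r\n") % (TARGET[0], TARGET[1], TARGET[0], TARGET[1], AUTH)
        s.sendall(req.encode())
        print(s.recv(4096).decode(errors="replace"))   # "200 Connection established" means a tunnel is possible
        s.close()

    A "403 Forbidden" here means the proxy itself is refusing CONNECT to that host and port, i.e. the problem is policy rather than corkscrew configuration.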

    Read the article

  • Google Analytics recording event based on <a> title attribute

    - by rlsaj
    I am declaring: var title = (typeof(el.attr('title')) != 'undefined' ) ? el.attr('title') :""; and then have the following: else if (title.match(/^"Matching Content"\:/i)) { elEv.category = "Matching Content Click"; elEv.action = "click-Matching-Content"; elEv.label = href.replace(/^https?\:\/\//i, ''); elEv.non_i = true; elEv.loc = href; } However, using Google Analytics debugger this is not being recorded. Any suggestions? The complete function is: if (typeof jQuery != 'undefined') { jQuery(document).ready(function gLinkTracking($) { var filetypes = /\.(avi|csv|dat|dmg|doc.*|exe|flv|gif|jpg|mov|mp3|mp4|msi|pdf|png|ppt.*|rar|swf|txt|wav|wma|wmv|xls.*|zip)$/i; var baseHref = ''; if (jQuery('base').attr('href') != undefined) baseHref = jQuery('base').attr('href'); jQuery('a').on('click', function (event) { var el = jQuery(this); var track = true; var href = (typeof(el.attr('href')) != 'undefined' ) ? el.attr('href') :""; var title = (typeof(el.attr('title')) != 'undefined' ) ? el.attr('title') :""; var isThisDomain = href.match(document.domain.split('.').reverse()[1] + '.' + document.domain.split('.').reverse()[0]); if (!href.match(/^javascript:/i)) { var elEv = []; elEv.value=0, elEv.non_i=false; if (href.match(/^mailto\:/i)) { elEv.category = "Email link"; elEv.action = "click-email"; elEv.label = href.replace(/^mailto\:/i, ''); elEv.loc = href; } else if (title.match(/^"Matching Content"\:/i)) { elEv.category = "Matching Content Click"; elEv.action = "click-Matching-Content"; elEv.label = href.replace(/^https?\:\/\//i, ''); elEv.non_i = true; elEv.loc = href; } else if (href.match(filetypes)) { var extension = (/[.]/.exec(href)) ? /[^.]+$/.exec(href) : undefined; elEv.category = "File Downloaded"; elEv.action = "click-" + extension[0]; elEv.label = href.replace(/ /g,"-"); elEv.loc = baseHref + href; } else if (href.match(/^https?\:/i) && !isThisDomain) { elEv.category = "External link"; elEv.action = "click-external"; elEv.label = href.replace(/^https?\:\/\//i, ''); elEv.non_i = true; elEv.loc = href; } else if (href.match(/^tel\:/i)) { elEv.category = "Telephone link"; elEv.action = "click-telephone"; elEv.label = href.replace(/^tel\:/i, ''); elEv.loc = href; } else track = false; if (track) { _gaq.push(['_trackEvent', elEv.category.toLowerCase(), elEv.action.toLowerCase(), elEv.label.toLowerCase(), elEv.value, elEv.non_i]); if ( el.attr('target') == undefined || el.attr('target').toLowerCase() != '_blank') { setTimeout(function() { location.href = elEv.loc; }, 400); return false; } } } }); }); }
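
    One detail that may be worth double-checking (an observation on the excerpt, not a confirmed fix): /^"Matching Content"\:/i only matches titles that literally begin with a double-quoted "Matching Content":. If the anchor is written as title="Matching Content: ..." the quotes are not part of the attribute value, so the branch never fires. A quick way to see the difference, sketched with Python's re module and a made-up title value:

        import re

        title = 'Matching Content: related article'      # hypothetical title="..." attribute value

        quoted   = re.compile(r'^"Matching Content":', re.I)   # equivalent of /^"Matching Content"\:/i
        unquoted = re.compile(r'^Matching Content:', re.I)

        print(bool(quoted.match(title)))    # False - the literal quotes are required but absent
        print(bool(unquoted.match(title)))  # True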

    Read the article

  • Exporting only visible datagridview columns to excel

    - by Suresh E
    I need help exporting only the visible DataGridView columns to Excel. I have this code for hiding columns in the DataGridView: this.dg1.Columns[0].Visible = false; And then I have a button click event for exporting to Excel: // creating Excel Application Microsoft.Office.Interop.Excel._Application app = new Microsoft.Office.Interop.Excel._Application(); // creating new WorkBook within Excel application Microsoft.Office.Interop.Excel._Workbook workbook = app.Workbooks.Add(Type.Missing); // creating new Excelsheet in workbook Microsoft.Office.Interop.Excel._Worksheet worksheet = null; // see the excel sheet behind the program app.Visible = true; // get the reference of first sheet. By default its name is Sheet1. // store its reference to worksheet worksheet = workbook.Sheets["Sheet1"]; worksheet = workbook.ActiveSheet; // changing the name of active sheet worksheet.Name = "PIN korisnici"; // storing header part in Excel for (int i = 1; i < dg1.Columns.Count + 1; i++) { worksheet.Cells[1, i] = dg1.Columns[i - 1].HeaderText; } // storing Each row and column value to excel sheet for (int i = 0; i < dg1.Rows.Count - 1; i++) { for (int j = 0; j < dg1.Columns.Count; j++) { worksheet.Cells[i + 2, j + 1] = dg1.Rows[i].Cells[j].Value.ToString(); } } With this I want to export only the visible columns, but I get all of them. Can anyone help with this?
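
    The general shape of the fix, whatever the language, is to test each column's visibility flag before writing its header or its cells. A toy illustration of that filtering step in Python (the grid data and the CSV output are stand-ins, not the Interop code above):

        import csv

        # stand-in for the DataGridView: each column carries a Visible flag
        columns = [
            {"header": "Id",   "visible": False},   # hidden, like dg1.Columns[0]
            {"header": "Name", "visible": True},
            {"header": "City", "visible": True},
        ]
        rows = [[1, "Ana", "Zagreb"], [2, "Marko", "Split"]]

        keep = [i for i, c in enumerate(columns) if c["visible"]]   # indexes of visible columns only

        with open("export.csv", "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(columns[i]["header"] for i in keep)
            for r in rows:
                writer.writerow(r[i] for i in keep)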

    Read the article

  • Getting the EFS Private Key out of system image

    - by thaimin
    I recently had to re-install Windows 7 and I lost my exported private key for EFS. However, I have the entirety of my old user directory, and my thinking is that the key must be in there SOMEWHERE. The only question is how to get it out. I did find the PUBLIC keys in AppData\Roaming\Microsoft\SystemCertificates\My\Certificates. If I import them using certmgr.msc, the certificate information says I do have the private key, but if I try to export them it says I do not have the private key. Also, decryption of files doesn't work. There is also a "Keys" folder at AppData\Roaming\Microsoft\SystemCertificates\My\Keys. After importing the certificates I copied those over into my new installation, but it had no effect. I am starting to believe the private keys are either in AppData\Roaming\Microsoft\Protect\S-1-5-21-...\ or AppData\Roaming\Microsoft\Crypto\RSA\S-1-5-21-...\, but I am unsure how to use the files in those folders. Also, since my SID has changed, will I be able to use them? The other parts of the account have remained the same (name and password). I also have complete access to the user registry hive and most of the old system files (including the old system registry hives). I keep seeing references to a "Key Recovery Agent" but have not found anything about using one, just that it can be used. Thanks!

    Read the article

  • Security Token for Mac/Linux/Windows, self-managed, pref. open source?

    - by DevelopersDevelopersDevelopers
    I'm looking to buy an evaluation security token (combined smart card/usb reader) for my business that works on: Windows 7 x64 OS X 10.6.x x64 Ubuntu Linux (64 or 32 bit, 10.04 or 10.10, I can bend based on possible tokens) Functionality I need is: Login authentication Authentication for whole-disk encryption (in Linux/Windows, Mac is flexible here) Signing/encryption using PGP and x.509 certificates RSA-2048 key-capable (1024 not good enough.) I can manage the certificates myself Open source middleware/drivers (not necessarily FOSS, just source available. Can flex on this, I just want to be able to audit the code. OpenSC-compatible on Linux would be great.) Is there any token that can do all of this? Or would I need multiple ones to accomplish this? Or do I need to look at smart cards and readers to get this? I have been researching this for a while and have had a heck of a time even getting accurate information about products. Also, I am in the USA, and it appears that EU export laws prevent me from buying from there, so those vendors are out. I was looking at Feitian tokens from Gooze, but since they are in France I can't buy.

    Read the article

  • Prevent 'Run-time error '7' out of memory' error in Excel when using macro

    - by MasterJedi
    I keep getting this error whenever I run a macro in my excel file. Is there any way I can prevent this? My code is below. Debugging highlights the following line as the issue: ActiveSheet.Shapes.SelectAll My macro: Private Sub Save() Dim sh As Worksheet ActiveWorkbook.Sheets("Report").Copy 'Create new workbook with Sheets("Report"(2)) as only sheet. Set sh = ActiveWorkbook.Sheets(1) 'Set the new sheet to a variable. New workbook is now active workbook. sh.Name = sh.Range("B9") & "_" & Format(Date, "mmyyyy") 'Rename the new sheet to B9 value + date. With sh.UsedRange.Cells .Value = .Value 'eliminate all formulas .Validation.Delete 'remove all validation .FormatConditions.Delete 'remove all conditional formatting ActiveSheet.Buttons.Delete ActiveSheet.Shapes.SelectAll Selection.Delete lrow = Range("I" & Rows.Count).End(xlUp).Row 'select rows from bottom up to last containing data in column I Rows(lrow + 1 & ":" & Rows.Count).Delete 'delete rows with no data in column I Application.ScreenUpdating = False .Range("A410:XFD1048576").Delete Shift:=xlUp 'delete all cells outwith report range Application.ScreenUpdating = True Dim counter Dim nameCount nameCount = ActiveWorkbook.Names.Count counter = nameCount Do While counter > 0 ActiveWorkbook.Names(counter).Delete counter = counter - 1 Loop 'remove named ranges from workbook End With ActiveWorkbook.SaveAs "\\Marko\Report\" & sh.Name & ".xlsx" 'Save new workbook using same name as new sheet. ActiveWorkbook.Close False 'Close the new workbook. MsgBox ("Export complete. Choose the next ADP in cell B9 and click 'Calculate'.") 'Display message box to inform user that report has been saved. End Sub Not sure how to make this more efficient or to prevent this error.

    Read the article

  • How do you permanently disable the 'This Connection is Untrusted' page on Firefox

    - by TheIronChef9
    I'm going insane. Can someone please help me COMPLETELY DISABLE the 'This Connection is Untrusted' page in Firefox? Facts: I am running Firefox 23.0 on an Ubuntu machine (downloaded and installed Ubuntu today). It is a work computer and I have to use my employer's proxy. Visiting webpages/webapps like Gmail or Google brings up the 'This Connection is Untrusted' page, and I have to go through the whole tedious task of selecting 'I Understand the Risks', adding exceptions, etc. The fact is, I don't care about the risks. I would rather this computer melt into the ground than have to see that page ever again. I want to dance naked in untrusted pages and not give a damn about the consequences. I just never want to see that page again. Ever. For some sites (e.g. Wikipedia), the CSS doesn't load and I end up seeing them in plain text. As a result these sites are completely useless. I wasted hours trying to solve this for stackoverflow.com. These issues happen in Firefox on my Windows XP machine as well (also using the same proxy). I don't want to export/import certificates or create exceptions for every site that shows this bloody page. I just want this page gone. I don't want Firefox to tell me what's safe and what's not. Also, my system time and date are correct. I've also tried the lies on this page, with no good results. Edit: I've also tried going into the Advanced > Certificates > Validation settings and unchecking the 'Use the Online Certificate Status Protocol (OCSP) to confirm the current validity of certificates' checkbox. Nothing happened, even after restarting Firefox or rebooting. I need help. Thanks.

    Read the article

  • PassEnv does not find ENV variables

    - by quodlibetor
    I've got this /etc/profile.d/myfile.sh: export MYVAR=myval I also have a PassEnv MYVAR line in a <virtualhost> section of an apache conf dir. That lets me do things like: $ echo $MYVAR myval $ python >>> import os; os.getenv('MYVAR') 'myval' $ sudo echo $MYVAR myval $ sudo -i root# echo $MYVAR myval But then, despite that being the case I get: root# /sbin/service httpd restart /sbin/service httpd restart Stopping httpd: [ OK ] Starting httpd: [Mon Oct 22 14:44:02 2012] [warn] PassEnv variable MYVAR was undefined [ OK ] And all of my attempts to access MYVAR from within my wsgi scripts just don't work. Thoughts? Am I doing something obviously wrong? EDIT for more detail I've got a swarm of computers/VMs and a swarm of developers working on a swarm of projects. I need a simple central place to keep environment information, the most common is the "environment" (dev/stage/prod). The scheme that we've got (modifying *.wsgi programmatically) is turning out to be more fragile than we'd like. The main options that I see are: put things in the shell environment put things in other config files Getting things into the shell environment is the best, because we won't need to write yet more duplicated "what is my environment" code.
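
    A side note that is not in the original question but often settles this class of problem: /etc/profile.d is only sourced by login shells, so an httpd started from an init script never sees MYVAR no matter what interactive sudo shells show. One way to check what environment the running daemon actually received is to read /proc/<pid>/environ; a small sketch in Python (the only assumption is that the Apache processes show up as "httpd"):

        import os

        def proc_environ(pid):
            # /proc/<pid>/environ holds NUL-separated KEY=VALUE pairs
            with open("/proc/%s/environ" % pid, "rb") as fh:
                return dict(item.split(b"=", 1) for item in fh.read().split(b"\0") if b"=" in item)

        for pid in os.listdir("/proc"):
            if not pid.isdigit():
                continue
            try:
                with open("/proc/%s/comm" % pid) as fh:
                    name = fh.read().strip()
                if name == "httpd":                  # assumption: Apache worker processes are named httpd
                    env = proc_environ(pid)
                    print(pid, env.get(b"MYVAR", b"<not set>").decode())
            except (PermissionError, FileNotFoundError):
                continue

    Run it as root while httpd is up; if MYVAR comes back as "<not set>", the value has to be supplied to the service itself (for example via /etc/sysconfig/httpd on Red Hat-style systems, or with SetEnv instead of PassEnv) rather than via profile.d.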

    Read the article

  • thought about shared storage (NFS, Lustre) [closed]

    - by user134880
    Possible Duplicate: Can you help me with my capacity planning? I currently have a small cluster with a total of 8 nodes. 6 of them are computing nodes (Apache and VMware) and 2 nodes are for storage. The 2 storage nodes are identical: each storage server is a Linux box with 8 x 1 TB WD RE4 drives in soft RAID 10. The 1st box is the master and the 2nd is the slave; data is mirrored with DRBD. We export NFSv4 shares to Apache (for the document root) and iSCSI to VMware. Right now everything is working well and is stable, but it will soon be time to upgrade our system. I have been thinking of Lustre. Does anyone have real experience with Lustre or with medium-sized NFS clusters? Would it be a good idea just to upgrade the servers and change the HDDs to 3 TB? With NFS we will always have only 2 servers to maintain (one primary and one slave). Thanks. QUESTIONS: 1) Has anyone used Lustre in production? I have seen a lot of info about how hard Lustre is to set up because you need to compile your own kernel and patches, but those are answers from newbies. Is there someone who has used Lustre for some period of time? 2) The disk upgrade is only a description of strategy. I'm not asking whether 3 TB is enough; I'm asking whether it is right just to replace the HDDs instead of adding a new server (as with Lustre). Thanks again.

    Read the article

  • How to control/check CheckPoint rules changes (and another System events)

    - by user35115
    I need to check/control all system events on many CheckPoint FW1 firewalls - to avoid misunderstanding: not rule triggering, but events such as admin logons, rule changes and so on. I found out that I can export logs using 2 methods: grab the logs directly, or use a special script that redirects CheckPoint log entries to syslog (FW1-Loggrabber). But it's not clear to me whether such logs also contain the information I need (admin logons, rule changes), and if yes, whether it is possible to filter events. I also suppose that if the system is based on a *nix platform, there must be a way to use the system's built-in functions to do what I want. Unfortunately I don't know where to "dig" - maybe you know? Updated: New info: "FW-1 can pipe its logs to syslog via Unix's logger command, and there are third party log-reading utilities". So, the main question is: what is the best way to do this task? Has anybody already solved such a problem? P.S. I'm new to CheckPoint, so all information will be useful to me. Thank you.

    Read the article

  • Explain why .bash_logout won't run commands?

    - by Droogans
    So I've been wondering how to run these two lines of code everytime I close an open instance of Terminal: history -c cat /dev/null > ~/.bash_history I export HISTFILE=5 on startup, but still want to flush that out when I'm done. I've tried looking around a bit in a couple of places, and haven't had much luck. I run Linux Mint, and would also note here that I ran into a similar issue with .bash_profile; eventually, I discovered I needed to place all start up code in .bashrc, so maybe that has something to do with it. Here's my .bash_logout file: #!/bin/bash # ~/.bash_logout: executed by bash(1) when login shell exits. # when leaving the console clear the screen to increase privacy if [ "$SHLVL" = 1 ]; then history -c cat /dev/null > ~/.bash_history [ -x /usr/bin/clear_console ] && /usr/bin/clear_console -q fi #this does nothing on exit... echo 'logout'; sleep 2s I've tried re-arranging this script many ways, I'm not sure if I don't understand how bash works, and if any of this is running in the first place. Does the fact that I run Xserver make bash consider Terminal something that isn't a log-out on exit?

    Read the article

  • Specific issue on data pump API in oracle

    - by Median Hilal
    I have a client/server architecture, using an Oracle DBMS on the database server side. I need to perform a user-triggered (from the client side) backup of the database, and the best way to do that is a stored procedure on the server side which the client can call, as the client has no Oracle tools to perform the backup. I've searched through the available solutions and found that using a stored procedure is the best way, and that the Oracle Data Pump API is the best thing to use inside a PL/SQL stored procedure. My specific questions about the API concern two issues. ---- The first ----- The detach function that detaches the handle: is it necessary to call it at the end of the procedure, and what if I don't use it? I read the Oracle documentation but I didn't get their point; they say it doesn't terminate the job but indicates that the user is not interested in it, yet when I use detach at the end of my procedure the exported .dmp file disappears. ---- The second ----- To perform a user-triggered (client side) backup, since the modifications are only to the data, I used the TABLE parameter for the export operation. But the version parameter... what should it be? I read the documentation but couldn't determine what I need (LATEST or COMPATIBLE)? Thanks

    Read the article

  • nginx root directory not forwarding correctly

    - by user66700
    The server files are store in /var/www/ Everything was working perfectly, then I've been getting the following errors 2011/01/28 17:20:05 [error] 15415#0: *1117703 "/var/www/https:/secure.domain.com/index.html" is not found (2: No such file or directory), client: 119.110.28.211, server: secure.domain.com, request: "HEAD /https://secure.domain.com/ HTTP/1.1", host: "secure.domain.com" Heres my config: server { server_name secure.domain.com; listen 443; listen [::]:443 default ipv6only=on; gzip on; gzip_comp_level 1; gzip_types text/plain text/html text/css application/x-javascript text/xml text/javascript; error_log logs/ssl.error.log; gzip_static on; gzip_http_version 1.1; gzip_proxied any; gzip_disable "msie6"; gzip_vary on; ssl on; ssl_ciphers RC4:ALL:-LOW:-EXPORT:!ADH:!MD5; keepalive_timeout 0; ssl_certificate /root/server.pem; ssl_certificate_key /root/ssl.key; location / { root /var/www; index index.html index.htm index.php; } }

    Read the article

  • Powershell Script to help archive company email

    - by sec_goat
    I am attempting to use powershell to gather emails that pertain to a certain subject, so that these correspondences might be turned over to a legal department. I am having a couple of issues here that I would like some assistance getting past. I run the following command: get-mailbox -Database "Mailbox Database" | Export-Mailbox -ContentKeywords "Keywords To Search" -TargetMailbox "sec_goat" -TargetFolder EmailSearch -StartDate "01/13/2011 12:01:00 This has pretty much done what I want, and returned a boat load of emails, however it has also flooded my inbox with hundreds of blank calendars and contact lists. I realize now I should have used the exclusion on these folders, as well as a test environment (which we don't have). 1.How can I clean up this script to not include all the blank folders, contacts and calendars that DO NOT match the keywords search? 2.How do I clean up hundreds of blank contact lists and calendars in my mailbox without right clicking and deleting each one? EDIT: I edited the post to change the scope of the question. I think my focus is less on the legal perspective and more on the "How can I clean up my mess and make future archives less messy and painful?"

    Read the article

  • Managing records of bugs and notes

    - by Jim
    Hi. I want to create a knowledgebase for a piece of software. I'd also like to be able to track bugs and common points of failure in that application. Linking knowledgebase articles to bug records would be a real boon, as would the ability to do complex queries for particular articles and bugs on the basis of tags or metadata. I've never done anything like this before, and like to install as little as possible. I've been looking at creating a wiki with Wiki On A Stick, and it seems to offer a lot. But I can't make complex queries. I can create pages that list all 'articles' with a particular single tag, but I can't specify multiple tags or filters. Is there any software that can help? I don't want to spend money until I've tried something out thoroughly, and I'd ideally like something that demands little-to-no installation. Are there any tools that can help me? If something could easily export its data, or stored data in XML, that would be a real plus too. Otherwise, are there any simple apps that allow me to set up forms for bugs, store data as XML then query and process that XML on demand? Thanks in advance.
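
    On the last point, storing bug records as XML and querying them on demand is easy to prototype with nothing more than a scripting runtime. A rough sketch using Python's standard xml.etree, with a made-up record format (the tags, fields and queries are illustrative only):

        import xml.etree.ElementTree as ET

        SAMPLE = """
        <bugs>
          <bug id="1"><title>Crash on export</title><tags>export,crash</tags><article>kb-12</article></bug>
          <bug id="2"><title>Slow startup</title><tags>performance</tags><article>kb-7</article></bug>
        </bugs>
        """

        root = ET.fromstring(SAMPLE)        # or ET.parse("bugs.xml").getroot() for a file on disk

        def with_tags(root, *wanted):
            # yield bugs whose comma-separated tag list contains every requested tag
            for bug in root.iter("bug"):
                tags = set(bug.findtext("tags", "").split(","))
                if set(wanted) <= tags:
                    yield bug

        for bug in with_tags(root, "export", "crash"):
            print(bug.get("id"), bug.findtext("title"), "->", bug.findtext("article"))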

    Read the article

  • Make eix available version match emerge

    - by Ryaner
    We have out Gentoo hosts using a binhost with EMERGE_DEFAULT_OPTS="--getbinpkgonly --usepkgonly" in the make.conf file so that the host only pulled down the binary hosts. All works well from that side. I use eix to check on software versions for upgrades but have hit a problem where eix will see an available version ahead of what is available on the binserver. Using glibc as an example ietpl [VE] / # emerge -s glibc Searching... [ Results for search key : glibc ] [ Applications found : 1 ] * sys-libs/glibc Latest version available: 2.14.1-r3 Latest version installed: 2.14.1-r3 Homepage: Description: GNU libc6 (also called glibc2) C library License: LGPL-2 Then eix reports a higher version available ietpl [VE] / # export LASTVERSION='{last}<version>{}' ietpl [VE] / # /usr/bin/eix --nocolor --format '<category> <name> [<installedversions:LASTVERSION>] [<bestversion:LASTVERSION>] \n' --exact --category-name sys-libs/glibc sys-libs glibc [2.14.1-r3] [2.15-r2] What I'm after is for eix to report the latest version available as 2.14.1-r3 like emerge. I've a feeling this is possible since without any formatting, eix returns Available versions: (2.2) ~2.9_p20081201-r3!s 2.10.1-r1!s 2.11.3!s ~2.12.1-r3!s 2.12.2!s{tbz2} ~2.13-r2!s 2.13-r4!s ~2.14!s ~2.14.1-r2!s 2.14.1-r3!s{tbz2} ~2.15-r1!s 2.15-r2!s ~2.15-r3!s **2.16.0!s **9999!s correctly tagging the latest unmasked binary package with {tbz2} I would have thought that the binary flag would do it, but that returns no matches --binary Match packages with *.tbz2 files.

    Read the article

  • Algorithm design, "randomising" timetable schedule in Python although open to other languages.

    - by S1syphus
    Before I start I should add that I am a musician and not a native programmer; this was undertaken to make my life easier. Here is the situation: at work I'm given a new csv file which contains a list of sound files, their lengths, and the minimum total amount of time each must be played. From this file I create a playlist of exactly 60 minutes. Each sample must be played at least its minimum number of instances, but the instances must be spread out from each other, so there is never a period where one sound is played twice in a row or in close proximity to itself. Secondly, if the minimum instances of each sound have been used and there is still time within the 60 minutes, it needs to fill the remaining time with more sounds until 60 minutes is reached, while still adhering to the above. The smallest duration possible is 15 seconds, and durations are multiples of 15 seconds. Here is what I came up with in Python and the problems I'm having with it; as one user said, it's buggy due to the random library used in it. So I'm guessing a total rethink is on the table, and this is where I need your help. What is the best way to solve the issue? I have had a brief look at things like knapsack and bin-packing algorithms; while both are relevant, neither seems quite appropriate, and they may be a bit beyond me.
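
    Since the script itself did not make it into the excerpt, here is one way to treat the two constraints separately (an illustrative sketch, not the asker's code; the sound names, durations and minimum counts below are invented). First build the multiset of plays: the minimum instances, then filler in 15-second multiples until 60 minutes are accounted for. Then order it with the greedy rule "always place the sound with the most copies left that differs from the previous one", which keeps identical sounds apart whenever that is possible at all.

        import heapq

        # (name, duration in seconds, minimum number of plays) - invented example data
        SOUNDS = [("rain", 30, 20), ("birds", 45, 15), ("wind", 15, 25), ("bell", 60, 10)]
        TOTAL = 60 * 60

        def build_counts(sounds, total):
            counts = {name: minimum for name, dur, minimum in sounds}
            used = sum(dur * minimum for name, dur, minimum in sounds)
            while True:   # fill the rest, always topping up the sound with the fewest plays so far
                fits = [(counts[n], d, n) for n, d, _ in sounds if used + d <= total]
                if not fits:
                    break
                _, dur, name = min(fits)
                counts[name] += 1
                used += dur
            return counts, used

        def spread(counts):
            # emit the sound with the most remaining plays that is not the one just played
            heap = [(-c, n) for n, c in counts.items() if c]
            heapq.heapify(heap)
            order, prev = [], None
            while heap:
                c, n = heapq.heappop(heap)
                if n == prev and heap:                     # would repeat: use the runner-up instead
                    c2, n2 = heapq.heappop(heap)
                    order.append(n2)
                    prev = n2
                    if c2 + 1:
                        heapq.heappush(heap, (c2 + 1, n2))
                    heapq.heappush(heap, (c, n))
                else:
                    order.append(n)
                    prev = n
                    if c + 1:
                        heapq.heappush(heap, (c + 1, n))
            return order

        counts, used = build_counts(SOUNDS, TOTAL)
        playlist = spread(counts)
        print(used, "seconds scheduled in", len(playlist), "plays")

    This only prevents back-to-back repeats; enforcing a minimum time gap between repeats of the same sound would need an extra check against the accumulated duration, and the sketch assumes the minimum plays fit inside 60 minutes in the first place.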

    Read the article

  • MkMapView Zoom Level

    - by meetS
    I'm using MKMapView with Google Maps. I've managed to show the map view and the address with an annotation pin, but I want an increased zoom level. How can I set it programmatically? -(void) showMyAddress { //Hide the keypad MKCoordinateRegion region; MKCoordinateSpan span; span.latitudeDelta=0.2; span.longitudeDelta=0.2; CLLocationCoordinate2D location = [self addressLocation]; region.span=span; region.center=location; if(addAnnotation != nil) { [mapView removeAnnotation:addAnnotation]; [addAnnotation release]; addAnnotation = nil; } addAnnotation = [[AddAnnotation alloc] initWithCoordinate:location]; [addAnnotation setMTitle:@"abc"]; [addAnnotation setMSubTitle:@"def."]; [mapView addAnnotation:addAnnotation]; [mapView setRegion:region animated:TRUE]; [mapView regionThatFits:region]; } -(CLLocationCoordinate2D) addressLocation { NSString *urlString = [NSString stringWithFormat:@"http://maps.google.com/maps/geo?q=%@&output=csv", [@"abc" stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]]; NSString *locationString = [NSString stringWithContentsOfURL:[NSURL URLWithString:urlString] encoding:NSStringEncodingConversionAllowLossy error:nil]; NSArray *listItems = [locationString componentsSeparatedByString:@","]; double latitude = 0.0; double longitude = 0.0; if([listItems count] >= 4 && [[listItems objectAtIndex:0] isEqualToString:@"200"]) { latitude = [[listItems objectAtIndex:2] doubleValue]; longitude = [[listItems objectAtIndex:3] doubleValue]; } else { } CLLocationCoordinate2D location; location.latitude = latitude; location.longitude = longitude; return location; }

    Read the article

  • How to populate Java (web) application with initial data using Spring/JPA/Hibernate

    - by Tuukka Mustonen
    I want to set up my database with initial data programmatically. I want to populate my database for development runs, not for testing runs (that's easy). The product is built on top of Spring and JPA/Hibernate. The workflow: the developer checks out the project; runs a command/script to set up the database with initial data; starts the application (server) and begins developing/testing; and then runs a command/script to flush the database and set it up with new initial data, because the database structures or the initial data bundle have changed. What I want is to set up just enough of my environment to call my DAOs and insert new objects into the database. I do not want to create initial data sets in raw SQL or XML, take database dumps, or whatever. I want to programmatically create objects and persist them in the database as I would in normal application logic. One way to accomplish this would be to start up my application normally and run a special servlet that does the initialization, but is that really the way to go? I would love to execute the initial data setup as a Maven task, and I don't know how to do that if I take the servlet approach. There is a somewhat similar question. I took a quick glance at the suggested DBUnit and Unitils, but they seem to be heavily focused on setting up testing environments, which is not what I want here. DBUnit does initial data population, but only using XML/CSV fixtures, which is not what I'm after. Then, Maven has a SQL plugin, but I don't want to handle raw SQL. Maven also has a Hibernate plugin, but it seems to help only with Hibernate configuration and table schema creation (not with populating the db with data). How do I do this?

    Read the article

  • Empty responseText from XMLHttpRequest

    - by PurplePilot
    I have written an XMLHttpRequest which runs fine but returns an empty responseText. The javascript is as follows: var anUrl = "http://api.xxx.com/rates/csv/rates.txt"; var myRequest = new XMLHttpRequest(); callAjax(anUrl); function callAjax(url) { myRequest.open("GET", url, true); myRequest.onreadystatechange = responseAjax; myRequest.setRequestHeader("Cache-Control", "no-cache"); myRequest.send(null); } function responseAjax() { if(myRequest.readyState == 4) { if(myRequest.status == 200) { result = myRequest.responseText; alert(result); alert("we made it"); } else { alert( " An error has occurred: " + myRequest.statusText); } } } The code runs fine. I can walk through and I get the readyState == 4 and a status == 200 but the responseText is always blank. I am getting a log error (in Safari debug) of Error dispatching: getProperties which I cannot seem to find reference to. I have run the code in Safari and Firefox both locally and on a remote server. The URL when put into a browser will return the string and give a status code of 200. I wrote similar code to the same URL in a Mac Widget which runs fine, but the same code in a browser never returns a result.

    Read the article

  • Impersonation in ASP.NET MVC

    - by eibrahim
    I have an Action that needs to read a file from a secure location, so I have to use impersonation to read the file. This code WORKS: [AcceptVerbs(HttpVerbs.Get)] public ActionResult DirectDownload(Guid id) { if (Impersonator.ImpersonateValidUser()) { try { var path = "path to file"; if (!System.IO.File.Exists(path)) { return View("filenotfound"); } var bytes = System.IO.File.ReadAllBytes(path); return File(bytes, "application/octet-stream", "FileName"); } catch (Exception e) { Log.Exception(e); }finally { Impersonator.UndoImpersonation(); } } return View("filenotfound"); } The only problem with the above code is that I have to read the entire file into memory and I am going to be dealing with VERY large files, so this is not a good solution. But if I replace these 2 lines: var bytes = System.IO.File.ReadAllBytes(path); return File(bytes, "application/octet-stream", "FileName"); with this: return File(path, "application/octet-stream", "FileName"); It does NOT work and I get the error message: Access to the path 'c:\projects\uploads\1\aa2bcbe7-ea99-499d-add8-c1fdac561b0e\Untitled 2.csv' is denied. I guess using the File results with a path, tries to open the file at a later time in the request pipeline when I have already "undone" the impersonation. Remember, the impersonation code works because I can read the file in the bytes array. What I want to do though is stream the file to the client. Any idea how I can work around this? Thanks in advance.

    Read the article

  • Problem with Informix JDBC, MONEY and decimal separator in string literals

    - by Michal Niklas
    I have problem with JDBC application that uses MONEY data type. When I insert into MONEY column: insert into _money_test (amt) values ('123.45') I got exception: Character to numeric conversion error The same SQL works from native Windows application using ODBC driver. I live in Poland and have Polish locale and in my country comma separates decimal part of number, so I tried: insert into _money_test (amt) values ('123,45') And it worked. I checked that in PreparedStatement I must use dot separator: 123.45. And of course I can use: insert into _money_test (amt) values (123.45) But some code is "general", it imports data from csv file and it was safe to put number into string literal. How to force JDBC to use DBMONEY (or simply dot) in literals? My workstation is WinXP. I have ODBC and JDBC Informix client in version 3.50 TC5/JC5. I have set DBMONEY to just dot: DBMONEY=. EDIT: Test code in Jython: import sys import traceback from java.sql import DriverManager from java.lang import Class Class.forName("com.informix.jdbc.IfxDriver") QUERY = "insert into _money_test (amt) values ('123.45')" def test_money(driver, db_url, usr, passwd): try: print("\n\n%s\n--------------" % (driver)) db = DriverManager.getConnection(db_url, usr, passwd) c = db.createStatement() c.execute("delete from _money_test") c.execute(QUERY) rs = c.executeQuery("select amt from _money_test") while (rs.next()): print('[%s]' % (rs.getString(1))) rs.close() c.close() db.close() except: print("there were errors!") s = traceback.format_exc() sys.stderr.write("%s\n" % (s)) print(QUERY) test_money("com.informix.jdbc.IfxDriver", 'jdbc:informix-sqli://169.0.1.225:9088/test:informixserver=ol_225;DB_LOCALE=pl_PL.CP1250;CLIENT_LOCALE=pl_PL.CP1250;charSet=CP1250', 'informix', 'passwd') test_money("sun.jdbc.odbc.JdbcOdbcDriver", 'jdbc:odbc:test', 'informix', 'passwd') Results when I run money literal with dot and comma: C:\db_examples>jython ifx_jdbc_money.py insert into _money_test (amt) values ('123,45') com.informix.jdbc.IfxDriver -------------- [123.45] sun.jdbc.odbc.JdbcOdbcDriver -------------- there were errors! Traceback (most recent call last): File "ifx_jdbc_money.py", line 16, in test_money c.execute(QUERY) SQLException: java.sql.SQLException: [Informix][Informix ODBC Driver][Informix]Character to numeric conversion error C:\db_examples>jython ifx_jdbc_money.py insert into _money_test (amt) values ('123.45') com.informix.jdbc.IfxDriver -------------- there were errors! Traceback (most recent call last): File "ifx_jdbc_money.py", line 16, in test_money c.execute(QUERY) SQLException: java.sql.SQLException: Character to numeric conversion error sun.jdbc.odbc.JdbcOdbcDriver -------------- [123.45]
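
    A note that may help future readers (it is not in the post): the literal-versus-DBMONEY ambiguity disappears if the value is bound as a parameter instead of being spliced into the SQL text, because java.math.BigDecimal always uses the dot form regardless of locale. A hedged sketch in the same Jython style as the test script above, with the URL and credentials as placeholders:

        from java.lang import Class
        from java.math import BigDecimal
        from java.sql import DriverManager

        Class.forName("com.informix.jdbc.IfxDriver")
        DB_URL = "jdbc:informix-sqli://169.0.1.225:9088/test:informixserver=ol_225"   # placeholder, as in test_money above
        db = DriverManager.getConnection(DB_URL, "informix", "passwd")

        ps = db.prepareStatement("insert into _money_test (amt) values (?)")
        ps.setBigDecimal(1, BigDecimal("123.45"))   # bound value, not a string literal inside the SQL
        ps.executeUpdate()
        ps.close()
        db.close()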

    Read the article
