Search Results

Search found 22625 results on 905 pages for 'must do better'.

Page 445/905 | < Previous Page | 441 442 443 444 445 446 447 448 449 450 451 452  | Next Page >

  • Looking for Unix tool/script that, given an input path, will compress every batch of uncompressed 100MB text files into a single gzip file

    - by newToFlume
    I have a dump of thousands of small text files (1-5 MB each), each containing lines of text. I need to "batch" them up so that each batch has a fixed size - say 100 MB - and compress that batch. A batch could be either:
    - A single file that is just a 'cat' of the contents of the individual text files, or
    - Just the individual text files themselves.
    Caveats:
    - unix split -b will not work here, as I need to keep lines of text intact. Using its lines option is complicated because there is a large variance in the number of bytes per line.
    - The batches need not be exactly the requested size, as long as they are within 5% of it.
    - The lines are critical and should not be lost: I need to confirm that the input made its way to the output without loss. What checksum should I use for that (something like CRC32, but better/"stronger" in the face of collisions)?
    A script should do nicely, but this seems like a task someone has done before, and it would be nice to see some code (preferably Python or Ruby) that does at least something similar.
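    A minimal sketch of one way to do this in Python, assuming the files sit in a single directory, a batch may overshoot the target by at most one line, and a SHA-256 digest over everything written (rather than a rolling CRC) is acceptable for the loss check. The file names and the 100 MB target are illustrative:

        import gzip
        import hashlib
        from pathlib import Path

        BATCH_BYTES = 100 * 1024 * 1024  # target batch size, ~100 MB

        def batch_compress(input_dir, output_dir):
            """Concatenate small text files into ~100 MB gzip batches, keeping lines intact."""
            out_dir = Path(output_dir)
            out_dir.mkdir(parents=True, exist_ok=True)
            digest = hashlib.sha256()            # checksum of all bytes written, to verify no loss
            batch_no, written = 0, 0
            out = gzip.open(out_dir / f"batch_{batch_no:05d}.gz", "wb")
            for path in sorted(Path(input_dir).glob("*.txt")):
                with open(path, "rb") as src:
                    for line in src:             # reading line by line keeps lines intact
                        if written + len(line) > BATCH_BYTES and written > 0:
                            out.close()
                            batch_no, written = batch_no + 1, 0
                            out = gzip.open(out_dir / f"batch_{batch_no:05d}.gz", "wb")
                        out.write(line)
                        digest.update(line)
                        written += len(line)
            out.close()
            return digest.hexdigest()

    Recomputing the same digest over the gunzipped batches and comparing the two values confirms that nothing was dropped on the way through.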

    Read the article

  • Refreshing Windows Media library by command line

    - by dangowans
    Many file download managers allow you to run a command after your download finishes. Is there a command line to run a Windows Media Player 12 library refresh? Videos don't show up in the available list on my PS3 until the library is refreshed. Right now, I manually open Windows Media Player after the downloads finish, watch the bottom-right corner for the refresh to complete (i.e. "Update Complete"), then close the player. This works, but there has to be a better way. Yes, I know PS3 Media Server would do the trick, and I do use it when I need to transcode something, but WMP is running all the time, so I'd like to take advantage of it.

    Read the article

  • maxItemsInObjectGraph limit required to be changed for server and client

    - by Michael Freidgeim
    We have a WCF service that is expected to return a huge XML payload. It worked OK in testing, but in production it failed with the error "Maximum number of items that can be serialized or deserialized in an object graph is '65536'. Change the object graph or increase the MaxItemsInObjectGraph quota."
    The MSDN article about the dataContractSerializer XML configuration element correctly describes the maxItemsInObjectGraph attribute default as 65536, but the documentation for the DataContractSerializer.MaxItemsInObjectGraph property and the DataContractJsonSerializer.MaxItemsInObjectGraph property talks about Int32.MaxValue, which causes confusion - in particular because Google shows the property articles before the configuration articles.
    When we changed the value in the WCF service configuration, it didn't help, because the same change must ALSO be made on the client.
    There are similar posts:
    http://stackoverflow.com/questions/6298209/how-to-fix-maxitemsinobjectgraph-error/6298356#6298356 - "You need to set the MaxItemsInObjectGraph on the dataContractSerializer using a behavior on both the client and service."
    http://devlicio.us/blogs/derik_whittaker/archive/2010/05/04/setting-maxitemsinobjectgraph-for-wcf-there-has-to-be-a-better-way.aspx
    http://stackoverflow.com/questions/2325321/maxitemsinobjectgraph-ignored/4455209#4455209 - "I had forgot to place this setting in my client app.config file."
    http://stackoverflow.com/questions/9191167/maximum-number-of-items-that-can-be-serialized-or-deserialized-in-an-object-gra
    http://stackoverflow.com/questions/5867304/datacontractjsonserializer-and-maxitemsinobjectgraph?rq=1 - It seems that DataContractJsonSerializer.MaxItemsInObjectGraph actually defaults to 65536: there is no configuration element for the JSON serializer, yet it still complains about the limit.
    I believe that MS should clarify the property documentation regarding the default limit and make the error messages specific enough to distinguish server-side from client-side errors.
    Note that as a workaround it is possible to use the commonBehaviors section, which can be defined only in machine.config:
        <commonBehaviors>
          <behaviors>
            <endpointBehaviors>
              <dataContractSerializer maxItemsInObjectGraph="..." />
            </endpointBehaviors>
          </behaviors>
        </commonBehaviors>

    Read the article

  • How to quickly open an application for the 2nd time via Dash?

    - by Andre
    When I want to open an application via Dash, I just hit Super, type the first letters, and hit Enter. For instance: Super, "drop", Enter to start Dropbox. However, if I want to start an application again, Dash remembers it, but I cannot start it by hitting Enter although "drop" is still in there and Dropbox is in the first position. Why? How can I (without using the mouse) start an application again?
    UPDATE - a better example (hopefully):
    - Super ... type "ged" ... Enter to start Gedit
    - close Gedit
    - Super ... and now? "ged" is remembered, and Gedit is still in pole position, ready to be started. However, hitting Enter does not work.
    How can I start an application again - without using the mouse or retyping? If I have to retype, it makes no sense that Dash remembers the application and my typed letters. I assume there is a way to open the application again with just Super + Enter (or something similar). Thanks!

    Read the article

  • Tar and gzip together, but the other way round?

    - by Boldewyn
    Gzipping a tar file as a whole is drop-dead easy and even implemented as an option inside tar. So far, so good. However, from an archiver's point of view, it would be better to tar the gzipped single files. (The rationale is that data loss is minimized: a single corrupt gzipped file costs you one file, whereas a tarball corrupted by gzip or copy errors costs you everything.) Does anyone have experience with this? Are there drawbacks? Are there more solid/tested solutions for this than:
        find folder -exec gzip '{}' \;
        tar cf folder.tar folder
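    A minimal sketch of the same idea using only Python's standard library, assuming the goal is one .gz per file collected into a single uncompressed tar; the folder and archive names are illustrative:

        import gzip
        import shutil
        import tarfile
        from pathlib import Path

        def gzip_then_tar(folder, archive="folder.tar"):
            """Compress each file individually, then collect the .gz files into one tar archive."""
            folder = Path(folder)
            with tarfile.open(archive, "w") as tar:          # plain tar: members are already gzipped
                for path in sorted(p for p in folder.rglob("*") if p.is_file()):
                    gz_path = path.with_suffix(path.suffix + ".gz")
                    with open(path, "rb") as src, gzip.open(gz_path, "wb") as dst:
                        shutil.copyfileobj(src, dst)         # one gzip stream per file
                    tar.add(gz_path, arcname=str(gz_path.relative_to(folder)))

    This mirrors the find + tar pipeline above, with the advantage that a corrupted member only affects that one file rather than the whole archive.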

    Read the article

  • Dividing with Gnu's bc

    - by Boldewyn
    I'm just starting with GNU's bc and I'm stuck at the very beginning (very discouraging...). I want to divide two numbers and get a float as the result:
        $ bc
        bc 1.06.94
        Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
        This is free software with ABSOLUTELY NO WARRANTY.
        For details type `warranty'.
        15/12
        1
        15.0/12.0
        1
        15.000000/12.000000
        1
        scale(15.00000)
        5
    The man page says that division returns a number with the same scale as the initial values. Obviously this is either not true or I'm missing something. Googling hasn't brought up any new insights (besides the fact that 'BC' can also stand for 'British Columbia'). Do you see my error? Better yet, do you know any good references/tutorials for bc?

    Read the article

  • How to move from Programmer to Project Lead

    - by DoctaStooge
    At my job I'm currently a programmer, but in the next few weeks I'll be taking control of my own project. I was wondering if anyone else here has been in the same situation, and if so, what advice you can offer to help me run my project better. Experience in dealing with contractors would be greatly appreciated.
    A little more info:
    - The project will have 3 people including myself, with extra people coming in when testing is needed.
    - The project has been programmed mainly by 2 people.
    - I would like to contribute to the programming, as I like doing it and think I can add to the program, but I am afraid of how the contractors will react. I don't want to create bad feelings which may harm the project.
    EDIT: Forgot to mention that I'll have to be picking up communications with customers to make sure their needs are met. Any advice on talking to customers cold would be greatly appreciated.
    EDIT 2: This is not a new project; I'm picking it up around version 6. Sorry that I didn't make it clear before.

    Read the article

  • How to implement isValid correctly?

    - by Songo
    I'm trying to provide a mechanism for validating my object like this:
        class SomeObject {
            private $_inputString;
            private $_errors = array();

            public function __construct($inputString) {
                $this->_inputString = $inputString;
            }

            public function getErrors() {
                return $this->_errors;
            }

            public function isValid() {
                $isValid = preg_match("/Some regular expression here/", $this->_inputString);
                if ($isValid == 0) {
                    $this->_errors[] = 'Error was found in the input';
                }
                return $isValid == 1;
            }
        }
    Then when I'm testing my code I'm doing it like this:
        $obj = new SomeObject('an INVALID input string');
        $isValid = $obj->isValid();
        $errors = $obj->getErrors();
        $this->assertFalse($isValid);
        $this->assertNotEmpty($errors);
    Now the test passes correctly, but I noticed a design problem here. What if the user called $obj->getErrors() before calling $obj->isValid()? The test will fail, because the user has to validate the object first before checking the errors resulting from validation. I think this means the user depends on a sequence of actions to work properly, which I think is a bad thing because it exposes the internal behaviour of the class.
    How do I solve this problem? Should I tell the user explicitly to validate first? Where do I mention that? Should I change the way I validate? Is there a better solution for this?
    UPDATE: I'm still developing the class, so changes are easy; renaming functions and refactoring them is possible.
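    One common way to remove that ordering dependency is to run the validation lazily, so that both isValid() and getErrors() trigger it on first use. A rough sketch of the idea, written in Python rather than PHP purely to illustrate the shape (the digit check stands in for the real regular expression):

        class SomeObject:
            """Validator that computes its errors lazily, so the call order no longer matters."""

            def __init__(self, input_string):
                self._input_string = input_string
                self._errors = None            # None means "not validated yet"

            def _validate(self):
                if self._errors is None:       # run the checks at most once
                    self._errors = []
                    if not self._input_string.isdigit():   # stand-in for the real regex check
                        self._errors.append("Error was found in the input")

            def get_errors(self):
                self._validate()
                return self._errors

            def is_valid(self):
                self._validate()
                return not self._errors

    With this shape, calling get_errors() before is_valid() (or never calling is_valid() at all) still returns a meaningful result.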

    Read the article

  • how to copy the results from a grep command to the bash clipboard?

    - by avilella
    If I type something in a Linux bash terminal with no X and then press Ctrl+u, whatever I typed is stored in the bash "clipboard" (for lack of a better term), and I can type it again by pressing Ctrl+y. How can I copy the results of a grep command on a text file to that bash clipboard? For example, if I have an INSTALL file like this:
        ./installprocedure --do-some-long-and-complicated-operation-on-dir dir1
    how can I copy the output of a grep command so that it's available via Ctrl+y? For example:
        copy content to bash clipboard "grep installprocedure INSTALL"
        Ctrl+y
        ./installprocedure --do-some-long-and-complicated-operation-on-dir dir1  #cursor available here

    Read the article

  • How to map command in vim that maintains mode when invoked?

    - by Phoenix
    I'm configuring vim in Mac OS X's Terminal app to do useful things with my arrow keys (among others). For example, I want option-left to move the cursor back one word, similarly to how it works in other Mac applications. In normal mode this is easy enough; I can simply map the sequence to b. But when I'm in insert mode, I want to stay in insert mode (i.e., map the sequence to <c-o>b). In my .vimrc file, I have these lines:
        nmap ^[[xol~ b
        imap ^[[xol~ <c-o>b
    where ^[[xol~ is the character sequence that I've configured Terminal to send when I press option-left. This works, but it gets pretty tedious, especially when I've got nearly two dozen commands that I want to map. Is there a better way to do this?

    Read the article

  • How to follow up new technologies and software releases as they come out

    - by Developer
    I don't know if this is the right place to ask this question; if not, please move it to the right area. Is there a website/app/forum/newsletter where I can follow up on different software and technologies and their features soon after new releases? For example:
    - iPad / iPhone / iOS 7, as soon as it is released, with its features
    - the next Android version, soon after it is released, with its features
    - CorelDRAW X7, whenever it is released
    - new Zend Framework versions, whenever they are released
    - Mac, new PHP versions, new Photoshop versions ...
    The list goes on and on, and it's difficult to follow up on new technologies as they come out. Is there a better and less time-consuming way to keep up?

    Read the article

  • Spoof MAC address in Windows 7: Bypass

    - by lpd
    I am trying to spoof the MAC address of my new Win7 laptop. To do so I tried specifying an alternate value in Device Manager, which had no effect. I also tried the registry, as per other threads here, to no avail. Interestingly, I also found that the registry contained a path 000X\Ndi\params\NetworkAddress\default (REG_SZ), but changing that had no effect either :( I can only guess I share the same issue as here: http://forums.anandtech.com/showthread.php?t=2096480 as the wireless adaptor is the same brand bundled with the same operating system. So my question is: is there anything better I can do to achieve a spoofed physical address than rolling back the drivers to some older version?

    Read the article

  • SNMP - So I have a MIB. Now What?

    - by senfo
    I can't seem to get my head wrapped around the purpose of a MIB. I have a collection of ~20 MIB files that were supplied to me by the vendor, but what do I do with them? I also have a few OIDs, supplied by the vendor, that don't seem to be valid. When I issue an "snmpget -v1 -c public 192.168.0.123 .1.4.6.3.2.6.2" (assume that's a valid OID), I get an error indicating the variable is unknown. Does this sound like a hardware configuration problem? Do I need to "load" (for lack of a better word) the MIB into the device? Unfortunately, the vendor has been completely unresponsive to emails about my questions, so any help would be greatly appreciated.
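    For what it's worth, MIB files are normally loaded by the management tools on your side rather than into the device. A rough sketch of pointing the standard net-snmp tools at a directory of vendor MIBs, wrapped in Python only for illustration (the directory and the VENDOR-MIB::someObject name are placeholders, not the vendor's real module):

        import subprocess

        MIB_DIR = "/path/to/vendor/mibs"   # wherever the ~20 vendor MIB files live
        TARGET = "192.168.0.123"

        # Translate a symbolic name from the vendor MIB into a numeric OID.
        subprocess.run(["snmptranslate", "-M", "+" + MIB_DIR, "-m", "ALL",
                        "-On", "VENDOR-MIB::someObject"], check=False)

        # Query the device using the symbolic name instead of a raw OID.
        subprocess.run(["snmpget", "-v1", "-c", "public",
                        "-M", "+" + MIB_DIR, "-m", "ALL",
                        TARGET, "VENDOR-MIB::someObject.0"], check=False)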

    Read the article

  • Open Source Highlight: namebench

    - by eddraper
    DNS is a big deal. Even small incremental changes to improve its performance can yield significant value due to the vast quantity of look-ups required when using the internet. Until now, it's always been one of those things I had to kinda take on faith... was my ISP doing a good job? Are those public DNS servers really that much faster? What about security and privacy concerns? Let me introduce you to namebench. This is the kind of tool I really love - one that immediately delivers value and is almost over-the-top OCD in its attention to detail. Trust me, this tool is utterly ruthless in its quest for getting it right - you're not left with a big question mark after it presents its data. The results are conclusive and actionable. Here's what it does: it hunts down the fastest DNS servers it can find from your desktop, using thousands of requests. No, it doesn't pop up a little dialog in 10 seconds to give you some "off the cuff" answer from a handful of providers. It takes the better part of 10-15 minutes to run. When it finishes, it presents you with a veritable horn-o-plenty of data. Mean response duration, response distribution, bad data - no stone is left unturned. Check it out. You'll dig it.
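    A toy sketch of the kind of measurement namebench automates - timing lookups against a couple of resolvers and comparing the averages - assuming the third-party dnspython package is available; the resolver IPs and host names are just examples:

        import time
        import dns.resolver

        CANDIDATES = {"Google": "8.8.8.8", "Cloudflare": "1.1.1.1"}
        NAMES = ["example.com", "wikipedia.org", "github.com"]

        for label, ip in CANDIDATES.items():
            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = [ip]          # query this server directly
            timings = []
            for name in NAMES:
                start = time.perf_counter()
                resolver.resolve(name, "A", lifetime=5)
                timings.append(time.perf_counter() - start)
            print(f"{label}: mean {sum(timings) / len(timings) * 1000:.1f} ms")

    namebench goes far beyond this (thousands of queries, cached vs. uncached names, bad-data checks), but the core idea is the same comparison of response times.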

    Read the article

  • Full Text Search Strategy For My Website

    - by Hosea146
    I have a website that allows users to search for items in various categories. Each category is a separate area (page) of my website. For example, some categories might be cars, bikes, books etc. At the moment a user has to search for an item by going to the page (for example, cars) and searching for the car they want. I would like to allow the user to search for anything on my site, from my main home page. At the moment, each page (category) has its own set of tables, and I don't really want to turn Full Text Search on for each table (20+ of them) and search each table individually when a search is done. This is going to be slow and tedious. What I'm thinking of doing is creating a single table that will hold all searchable information for each category of item (when an item is saved in its respective table, I would copy all searchable information over to my 'Search' table). I would then turn Full Text Search on for that table, and search that table. Does this sound reasonable? Is there a better way? I've never used Full Text Search before, so this is new to me.

    Read the article

  • HOSTNAME environment variable on Linux

    - by infogrind
    On my Linux box (Gentoo Linux 2.6.31, to be specific) I have noticed that the HOSTNAME environment variable is available in my shell, but not in scripts. For example,
        $ echo $HOSTNAME
    returns xxxxxxxx.com, but
        $ ruby -e 'puts ENV["HOSTNAME"]'
    returns nil. On the other hand, the USER environment variable, for instance, is available both in the shell and in scripts. I have noticed that USER appears in the list of environment variables printed when I type export (i.e., declare -x USER="infogrind"), but HOSTNAME doesn't. I suspect the issue has something to do with that. My questions: 1) how can I make HOSTNAME available in scripts, and 2) for my better understanding, where is this variable initially set, and why is it not "exported"?
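    As a side note, the environment is only one way for a script to learn the hostname. A small sketch, in Python for brevity, of checking both the (possibly unexported) variable and the operating system directly:

        import os
        import socket

        # None unless the parent shell actually exported HOSTNAME (e.g. via `export HOSTNAME`).
        print(os.environ.get("HOSTNAME"))

        # Asks the OS directly, so it works regardless of what the shell exported.
        print(socket.gethostname())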

    Read the article

  • Outlook mail/calendar items give errors after server migration

    - by Mike B
    Last Friday our Exchange server was migrated, by our external system administrator, to a new server with a new server name. Since then we have problems with the calendar/mail items that were created/sent/received on the old server:
    - Replies to mails get bounced if we use auto-complete in the To field. If we cancel auto-complete and manually enter the (same) e-mail address, then there's no problem. Our system administrator says this is because auto-complete fills in the old server name (???).
    - Calendar items created on the old server cannot be edited (without an error) and must be recreated if we want to change them.
    Our system administrator says these problems are normal with a server migration. I cannot believe this. There must be a better way. Am I right?

    Read the article

  • On installing nvidia drivers on 12.10 I get "Bad return status for module build on kernel: 3.5.0-19-generic (x86_64)"

    - by james
    New Ubuntu user - I just recently made the mistake of trying a different nvidia driver. I'd managed to get the last one (nvidia-current) working through Software Sources a few weeks ago. The other day I tried to cross over to nvidia-experimental-310 and this produced a system error. Swapping back and forth between proprietary drivers now always causes an error and I can't get any of them to work. Installing through the terminal I get this error message every time:
        Building initial module for 3.5.0-19-generic
        Error! Bad return status for module build on kernel: 3.5.0-19-generic (x86_64)
        Consult /var/lib/dkms/nvidia-experimental-310/310.14/build/make.log for more information
    On rebooting, I end up with a crappy screen resolution and a thick black border around the screen. I use gksudo software-properties-gtk to bring up sources, where I can change back to the nouveau driver, which restores my screen. After that I can't find /var/lib/dkms/nvidia-experimental-310/310.14/build/make.log, so I can't tell you what's inside. Any ideas what might be preventing the nvidia driver from installing?
    SOLUTION FOUND: Okay - so I have a workaround. This is what worked:
    - Upgrade to kernel 3.7.0 as detailed here
    - Upgrade to the latest version of the nvidia drivers as detailed here
    No idea what was happening with kernel 3.5.0-19, but this seems to be better. Boot is maybe a little slower, but after days of messing around it's nice to have something that works.

    Read the article

  • Password protected traffic meter

    - by UncleBob
    Hi. First, I have a small problem for which I haven't found a solution yet. I live in Bosnia and share the Internet connection with the landlady, and as is usual in Bosnia, we do not have a flat rate but a 15 GB traffic limit. That would actually be more than enough if the son of the landlady weren't watching videos all the time, so the bills are turning out rather expensive. I have already installed a traffic-monitoring program, but he apparently turns it off as soon as he comes close to his limit and then denies that he consumed any more. I therefore need at least a measurement program that is password protected and/or notes in its log when it has been turned off. Even better would be a program that just cuts his access when he exceeds his share, i.e. a mixture of traffic meter and parental guard. Can someone help me out here?

    Read the article

  • Massive 404 attack with non existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, with pages and queries that have never been there. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check whether the site is a WordPress site (wp_admin) and for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop it. The referrer is, according to Google Webmaster Tools, totally.me. I have looked for a solution to stop this, because it certainly isn't good for the poor real actual users, let alone the SEO concerns. I am using the Perishable Press mini blacklist (found here), a standard referrer blocker (for porn, herbal, casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless resource waste?
    EDIT: I've never used the question to thank for the answers, and I hope this can be done. Thank you all for your insightful replies, which helped me find my way out of this. I have followed everyone's suggestions and implemented the following:
    - a honeypot
    - a script that listens for suspect URLs in the 404 page and sends me an email with the user agent/IP, while returning a standard 404 header
    - a script that rewards legitimate users, in the same custom 404 page, in case they end up clicking on one of those URLs.
    In less than 24 hours I have been able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all answers if I could.
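    A rough sketch of the suspect-URL 404 handler described in that list, written here with Flask purely for illustration (the original site is not necessarily Python, it logs instead of emailing, and the patterns are examples):

        import logging
        from flask import Flask, request

        app = Flask(__name__)
        logging.basicConfig(filename="suspect_404.log", level=logging.INFO)

        SUSPECT_PATTERNS = ("wp-admin", "wp-login", "viewtopic.php", "cpanel")  # example probes

        @app.errorhandler(404)
        def not_found(error):
            path = request.path.lower()
            if any(pattern in path for pattern in SUSPECT_PATTERNS):
                # Record who asked for the bogus URL, but still answer with a plain 404.
                logging.info("suspect 404: %s ip=%s ua=%s",
                             request.path, request.remote_addr,
                             request.headers.get("User-Agent", "-"))
            return "Page not found", 404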

    Read the article

  • Dual displays not working - NVidia - Ubuntu 12.04 - Second Monitor - Two Screens

    - by user75105
    Graphics card: NVidia 460 GTX. Driver: NVIDIA accelerated graphics driver (version current). I have one DVI monitor, an old Dell LCD from 2005, and one VGA monitor, an Asus ML238H from 2010 whose HDMI port broke. The Asus is plugged into my graphics card's primary monitor slot and is the better monitor even though it is VGA, but my computer defaults to the Dell. This happens when I boot as well; the loading screens, the motherboard brand image, etc. are all displayed on the Dell monitor until Windows loads. Then both monitors work. The same thing happened when I booted up Ubuntu 12.04, but I did not see the second monitor when the log-in screen popped up, nor when I logged in. I went to System Settings/Displays and my Asus monitor is not an option. I clicked Detect Displays and the Asus is not detected. I looked at the other questions regarding NVIDIA drivers, recalled my problems with Ubuntu a few years ago, and decided to check the driver. I went to Additional Drivers to install the proprietary driver and it looks like it's installed and active, but I'm still having this problem. There is another driver option, the post-release NVIDIA driver, but that does not fix the problem either. Also, under System Details/Graphics the graphics device is listed as Unknown, which might indicate that it is using a generic open-source driver and not the proprietary NVidia driver. But under Additional Drivers it says that I am using the NVidia driver. Any help is appreciated.

    Read the article

  • Workstation Build: Single 2.66ghz i-7 with overclock potential, OR Dual 5520 2.26ghz Xeons?

    - by jdc0589
    There are probably better places to ask this, but I am used to the excellent quality of responses on Stack Overflow. I am rebuilding my desktop in a few months. Aside from normal lightweight internet usage, I use it to run SQL Server, MySQL, 1-2 Ubuntu VMs from time to time, lots of IDEs, and a media server for my PS3. The two possible setups cost the same amount (within $50) and would both have 12 GB of 1333 MHz DDR3 RAM and a 500 GB RAID-0 array (250 GB x 2). Now, if I go with a single i7 920 2.66 GHz quad-core, I can easily overclock it to 3 GHz, and would have cash left over to get a 160 GB SSD (either the OCZ Vertex or the 120 GB Intel) for the main OS/program install drive. Otherwise, I could get a dual LGA1366 motherboard with two E5520 Xeons (2.26 GHz) and just use the disks I already have. So, do I go for 8 physical/16 virtual cores at 2.26 GHz (no overclocking on server boards) with normal disk I/O, or 4 physical/8 virtual cores at 3.0 GHz with really outstanding disk I/O?

    Read the article

  • iptables: How to create a rule for a single website that does not apply to other websites?

    - by Kris
    A Virtual Dedicated Server hosts 10 websites, with 1 firewall made with iptables. If one of those 10 websites gets hit by too many ping requests coming from one IP address, how do I limit or drop that traffic without dropping it for the other 9 websites? Do I create a firewall for every website? If so, how? Or is it better to change my rules? If so, how? Thank you. The original question was posted here as "iptables: what's best practice when there're several websites but you want to use a rule for a single website?" but it was too vague. Let me know if more info is needed.

    Read the article

  • How should I compress a file with multiple bytes that are the same with Huffman coding?

    - by Omega
    On my great quest for compressing/decompressing files with a Java implementation of Huffman coding (http://en.wikipedia.org/wiki/Huffman_coding) for a school assignment, I am now at the point of building a list of prefix codes. Such codes are used when decompressing a file. Basically, the code is made of zeroes and ones that are used to follow a path in a Huffman tree (left or right) for, ultimately, finding a byte. In this Wikipedia image, to reach the character m the prefix code would be 0111. The idea is that when you compress the file, you basically convert all the bytes of the file into prefix codes instead (they tend to be smaller than 8 bits, so there's some gain). So every time the character m appears in a file (which in binary is actually 1101101), it will be replaced by 0111 (if we used the tree above). Therefore, 1101101110110111011011101101 becomes 0111011101110111 in the compressed file. I'm okay with that. But what if the following happens: in the file to be compressed there exists only one unique byte, say 1101101, and there are 1000 of those bytes. Technically, the prefix code of such a byte would be... none, because there is no path to follow, right? I mean, there is only one unique byte anyway, so the tree has just one node. Therefore, if the prefix code is none, I would not be able to write the prefix code in the compressed file, because, well, there is nothing to write. Which brings up this problem: how would I compress/decompress such a file if it is impossible to write a prefix code when compressing? (Using Huffman coding, due to the school assignment's rules.) This tutorial seems to explain a bit more about prefix codes: http://www.cprogramming.com/tutorial/computersciencetheory/huffman.html but doesn't seem to address this issue either.
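    A minimal sketch of one common workaround: when the frequency table contains a single symbol, give it a one-bit code (equivalently, add a dummy sibling node) so there is always something to write. Shown in Python rather than Java just to keep it short; the normal multi-symbol case uses the standard merge-the-two-smallest construction:

        import heapq
        from collections import Counter

        def huffman_codes(data):
            """Return a byte -> bit-string prefix code map, handling the single-symbol edge case."""
            freq = Counter(data)
            if len(freq) == 1:
                # Degenerate tree: only one distinct byte, so assign it a one-bit code.
                return {next(iter(freq)): "0"}
            # Heap entries: (count, tie-breaker, list of bytes in this subtree).
            heap = [(count, i, [byte]) for i, (byte, count) in enumerate(freq.items())]
            heapq.heapify(heap)
            codes = {byte: "" for byte in freq}
            tie = len(heap)
            while len(heap) > 1:
                lo_count, _, lo_bytes = heapq.heappop(heap)
                hi_count, _, hi_bytes = heapq.heappop(heap)
                for b in lo_bytes:              # left branch gets a leading 0
                    codes[b] = "0" + codes[b]
                for b in hi_bytes:              # right branch gets a leading 1
                    codes[b] = "1" + codes[b]
                heapq.heappush(heap, (lo_count + hi_count, tie, lo_bytes + hi_bytes))
                tie += 1
            return codes

    For example, huffman_codes(b"mmmm") returns {109: "0"}, so a file containing nothing but one repeated byte can still be encoded as one bit per occurrence plus the stored code table.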

    Read the article

  • How to trigger chef-client on all nodes from my workstation

    - by divyanshm
    I have 5 nodes and all of them have one setup cookbook in common. Now I would like to add another task to this common cookbook that would configure SQL Server for me on all the nodes. Is there a way/command to manually trigger this change across all clients right away? I use Azure VMs, and all the nodes are Windows Server 2012 machines. I could run knife winrm machine-name chef-client -m -x username -P password on each of the machines, but I'm sure there should be a better way of doing this. I'm new to Chef, so I might be missing a very basic command here.

    Read the article
