Search Results



  • HTML + CSS: fixed background image and body width/min-width (including fiddle)

    - by insertusernamehere
    So, here is my problem. I'm kinda stuck at the moment. I have a huge background image and content in the middle with these attributes:

    - the content is centered with margin auto and has a fixed width
    - the content is related to the image (like the image is continued within the content)
    - this relation is only horizontal (vertical scrolling moves everything around)

    This actually works fine (I'm only talking desktop, not mobile here :) ) with position: fixed on the huge background image. The problem that occurs is the following: when I resize the window to "smaller than the content", the background image gets its width from the body instead of the viewport, so the relation between content and image gets lost. Now I have this little JavaScript which does the trick, but this is of course some overhead I want to avoid:

        $(window).resize(function(){
            img.css('left', (body.width() - img.width()) / 2 );
        });

    This works with a fixed positioned image, but can get a little jumpy while calculating. I also tried things like this:

        <div id="test" style="
            position: absolute;
            z-index: 0;
            top: 0;
            left: 0;
            width: 100%:
            height: 100%;
            background: transparent url(content/dummy/brand_backgroud_1600_1.jpg) no-repeat fixed center top;
        "></div>

    But this gets me back to the problem described above. Is there any "script-less", elegant solution for this problem?

    UPDATE: now with fiddles.
    The one I'm trying to solve: http://jsfiddle.net/insertusernamehere/wPmrm/
    The one with JavaScript that works: http://jsfiddle.net/insertusernamehere/j5E8z/

    NOTE: The image size is always fixed. The image never gets scaled by the browser. In the JavaScript example it gets blown up. So don't care about the size.


  • ORA-01157 / Can't connect to database

    - by Tom
    Hi everyone, this is a follow up from this question. Let me start by saying that I am NOT a DBA, so I'm really really lost with this. A few weeks ago, we lost contact with one of our SIDs. All the other services are working, but this one in particular is not. What we got was this message when trying to connect:

        ORA-01033: ORACLE initialization or shutdown in progress

    An attempt to alter database open ended up in:

        ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
        ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf'

    I tried to shutdown / restart the database, but got this message:

        Total System Global Area  566231040 bytes
        Fixed Size                  1220604 bytes
        Variable Size             117440516 bytes
        Database Buffers          444596224 bytes
        Redo Buffers                2973696 bytes
        Database mounted.
        ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
        ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf'

    When all continued the same, I erased the dbf files (rm xxx_data.dbf xxx_index.dbf) and recreated them using touch xxx_data.dbf. I also tried to recreate the tablespaces using

        CREATE TABLESPACE DATA DATAFILE XXX_DATA.DBF

    and got "Database not open". As I said, I don't know how bad this is, or how far I am from regaining access to my database (well, to this SID at least, the others are working). I would imagine that a last resort would be to throw everything away and recreate it, but I don't know how to, and I was hoping there's a less destructive solution. Any help will be greatly appreciated. Thanks in advance.
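
    For reference, a sketch of one commonly cited recovery path when the datafile is genuinely lost and no backup exists; the file path is the one from the error message, the tablespace details at the end are placeholders, and taking a datafile offline this way discards its data, so treat it strictly as a last resort and confirm with a DBA first:

        -- run as SYSDBA while the database is mounted but not open
        STARTUP MOUNT;
        ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/xxx/xxx_data.dbf' OFFLINE DROP;
        ALTER DATABASE OPEN;

        -- afterwards the affected tablespace can be dropped and recreated (sizes illustrative)
        DROP TABLESPACE data INCLUDING CONTENTS AND DATAFILES;
        CREATE TABLESPACE data DATAFILE '/u01/app/oracle/oradata/xxx/xxx_data.dbf' SIZE 100M;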


  • Automated Legal Processing

    - by Chris S
    Will it ever be possible to make legal systems quantifiable enough to process with computer algorithms? What technologies would have to be in place before this is possible? Are there any existing technologies that are already trying to accomplish this?

    Out of curiosity, I downloaded the text for laws in my local municipality, and tried applying some simple NLP tricks to extract rules from sentences. I had mixed results. Some sentences were very explicit (e.g. "Cars may not be left in the park overnight"), but other sentences seemed hopelessly vague (e.g. "The council's purpose is to ensure the well-being of the community").

    I apologize if this is too open-ended a topic, but I've often wondered what society would look like if legal systems were based on less ambiguous language. Lawyers, and the legal process in general, are so expensive because they have to manually process a complex set of rules codified in ambiguous legal texts. If this system could be represented in software, this huge expense could potentially be eliminated, making the legal system more accessible for everyone.


  • R selecting duplicate rows

    - by Matt
    Okay, I'm fairly new to R and I've tried to search the documentation for what I need to do, but here is the problem. I have a data.frame called heeds.data in the following form (some columns omitted for simplicity):

        eval.num, eval.count, ... fitness, fitness.mean, green.h.0, green.v.0, offset.0, green.h.1, green.v.1, ... green.h.7, green.v.7, offset.7 ...

    And I have selected a row meeting the following criteria:

        best.fitness <- min(heeds.data$fitness.mean[heeds.data$eval.count = 10])
        best.row <- heeds.data[heeds.data$fitness.mean == best.fitness]

    Now, what I want are all of the other rows that have columns green.h.0 to offset.7 (a contiguous section of columns) equal to best.row. Basically I'm looking for rows that have some of the conditions the same as the "best" row. I thought I could just do this:

        heeds.best <- heeds.data$fitness[ heeds.data$green.h.0 == best.row$green.h.0 & ... ]

    But with 24 columns it seems like a stupid method. Looking for something a bit simpler with less manual typing. Thanks!
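
    A minimal sketch of one way to compare the whole block of columns at once, assuming best.row is a single row and that the column names follow the green.h.N / green.v.N / offset.N pattern described above:

        # columns green.h.0 ... offset.7, selected by name pattern
        cols <- grep("^(green\\.[hv]|offset)\\.", names(heeds.data), value = TRUE)

        # TRUE for every row whose values in those columns equal best.row's values
        same <- apply(heeds.data[, cols], 1, function(r) all(r == unlist(best.row[, cols])))

        heeds.best <- heeds.data[same, ]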


  • Generic FSM for game in C++ ?

    - by Mr.Gando
    Hello, I was wondering if there's a way I could code some kind of "generic" FSM for a game in C++. My game has a component oriented design, so I use an FSM component. My Finite State Machine (FSM) component looks more or less like this:

        class gecFSM : public gecBehaviour
        {
        public:
            // Constructors
            gecFSM() { state = kEntityState_walk; }
            gecFSM(kEntityState s) { state = s; }

            // Interface
            void setRule(kEntityState initialState, int inputAction, kEntityState resultingState);
            void performAction(int action);

        private:
            kEntityState fsmTable[MAX_STATES][MAX_ACTIONS];
        };

    I would love to hear your opinions/ideas or suggestions about how to make this FSM component generic. With generic I mean:

    1) Creating the fsmTable from an XML file is really easy; it's just a bunch of integers that can be loaded to create a MAX_STATES x MAX_ACTIONS matrix.

        void gecFSM::setRule(kEntityState initialState, int inputAction, kEntityState resultingState)
        {
            fsmTable[initialState][inputAction] = resultingState;
        }

    2) But what about the performAction method?

        void gecFSM::performAction(int action)
        {
            switch( fsmTable[ ownerEntity->getState() ][ action ] )
            {
                case WALK:  /* Walk action  */ break;
                case STAND: /* Stand action */ break;
                case JUMP:  /* Jump action  */ break;
                case RUN:   /* Run action   */ break;
            }
        }

    What if I wanted to make the "actions" and the switch generic? This is in order to avoid creating a different FSM component class for each GameEntity (gecFSMZombie, gecFSMDryad, gecFSMBoss, etc.). That would allow me to go "data driven" and construct my generic FSMs from files, for example. What do you people suggest?
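
    One possible direction, sketched under the assumption that states and actions can stay plain integers loaded from data: store per-state behaviour as std::function callbacks registered at setup time, so the switch disappears and a single component class can serve every entity type. Class and member names here are illustrative, not part of the original component:

        #include <cstdint>
        #include <functional>
        #include <unordered_map>

        // Illustrative generic FSM: transitions and per-state callbacks are data, not code.
        class GenericFSM {
        public:
            using State  = int;
            using Action = int;

            // Loaded from XML/JSON/etc.: (state, action) -> next state.
            void setRule(State from, Action action, State to) {
                transitions_[key(from, action)] = to;
            }

            // Behaviour attached at setup time instead of a hard-coded switch.
            void onEnter(State s, std::function<void()> callback) {
                enterCallbacks_[s] = std::move(callback);
            }

            void performAction(Action action) {
                auto it = transitions_.find(key(current_, action));
                if (it == transitions_.end()) return;      // no rule: ignore the action
                current_ = it->second;
                auto cb = enterCallbacks_.find(current_);
                if (cb != enterCallbacks_.end()) cb->second();
            }

            State current() const { return current_; }

        private:
            static std::uint64_t key(State s, Action a) {
                return (static_cast<std::uint64_t>(s) << 32) | static_cast<std::uint32_t>(a);
            }
            State current_ = 0;
            std::unordered_map<std::uint64_t, State> transitions_;
            std::unordered_map<State, std::function<void()>> enterCallbacks_;
        };

    In a setup like this, each entity type (zombie, dryad, boss) would differ only in the rule file it loads and the callbacks it registers, so one component class could cover them all.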


  • Is there a safe / standard way to manage unstructured memory in C++?

    - by andand
    I'm building a toy VM that requires a block of memory for storing and accessing data elements of different types and of different sizes. I've done this by writing a wrapper class around a uint8_t[] data block of the needed size. That class has some template methods to write / read typed data elements to / from arbitrary locations in the memory block, both of which check to make certain the bounds aren't violated. These methods use memmove in what I hope is a more or less safe manner. That said, while I am willing to press on in this direction, I've got to believe that others with more expertise have been here before and might be willing to share their wisdom. In particular:

    1) Is there a class in one of the C++ standards (past, present, future) that has been defined to perform a function similar to what I have outlined above?

    2) If not, is there a (preferably free as in beer) library out there that does?

    3) Short of that, besides bounds checking and the inevitable issue of writing one type to a memory location and reading a different one from that location, are there other issues I should be aware of?

    Thanks.
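
    For reference, a minimal sketch of the kind of wrapper described above, built on std::vector plus std::memcpy with explicit bounds checks; the class and method names are illustrative, and this is the usual hand-rolled approach rather than a standard-library facility:

        #include <cstddef>
        #include <cstdint>
        #include <cstring>
        #include <stdexcept>
        #include <type_traits>
        #include <vector>

        // Illustrative typed view over a raw byte block with bounds checking.
        class MemoryBlock {
        public:
            explicit MemoryBlock(std::size_t size) : bytes_(size) {}

            template <typename T>
            void write(std::size_t offset, const T& value) {
                static_assert(std::is_trivially_copyable<T>::value, "raw copy needs trivially copyable T");
                check(offset, sizeof(T));
                std::memcpy(bytes_.data() + offset, &value, sizeof(T));
            }

            template <typename T>
            T read(std::size_t offset) const {
                static_assert(std::is_trivially_copyable<T>::value, "raw copy needs trivially copyable T");
                check(offset, sizeof(T));
                T value;
                std::memcpy(&value, bytes_.data() + offset, sizeof(T));
                return value;
            }

        private:
            void check(std::size_t offset, std::size_t size) const {
                if (offset + size > bytes_.size())
                    throw std::out_of_range("MemoryBlock access out of bounds");
            }
            std::vector<std::uint8_t> bytes_;
        };

    Copying through memcpy into a properly typed local, rather than reinterpret_cast-ing into the buffer, also sidesteps alignment and strict-aliasing problems, which are the other classic issues with this kind of block.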


  • Using AJAX to get a specific DOM Element (using Javascript, not jQuery)

    - by Matt Frost
    How do you use AJAX (in plain JavaScript, NOT jQuery) to get a page (same domain) and display just a specific DOM element, such as the element marked with the id "bodyContent"? I'm working with MediaWiki 1.18, so my methods have to be a little less conventional (I know that a version of jQuery can be enabled for MediaWiki, but I don't have access to do that, so I need to look at other options). I apologize if the code is messy, but there's a reason I need to build it this way. I'm mostly interested in seeing what can be done with the JavaScript.

    Here's the HTML code:

        <div class="mediaWiki-AJAX"><a href="http://www.domain.com/whatever"></a></div>

    Here's the JavaScript I have so far:

        var AJAXs = document.getElementsByClassName('mediaWiki-AJAX');
        if (AJAXs.length > 0) {
            for (var i = AJAXs.length - 1; i >= 0; i--) {
                var URL = AJAXs[i].getElementsByTagName('a')[0].href;
                xmlhttp = new XMLHttpRequest();
                xmlhttp.onreadystatechange = function() {
                    if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                        AJAXs[i].innerHTML = xmlhttp.responseText;
                    }
                }
                xmlhttp.open('GET',URL,true);
                xmlhttp.send();
            }
        }
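
    A sketch of one way to pull out just one element from the response: parse the fetched HTML into a detached container and copy the wanted node over, keeping the request variables local to each call so the callback sees the right element. The '#bodyContent' id comes from the question; the function name and everything else is illustrative:

        // Illustrative only: fetch a same-domain page and keep just #bodyContent.
        function loadFragment(target, url) {
            var xhr = new XMLHttpRequest();               // local to this call, not shared
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4 && xhr.status === 200) {
                    var holder = document.createElement('div');
                    holder.innerHTML = xhr.responseText;  // parse the fetched page off-DOM
                    var wanted = holder.querySelector('#bodyContent');
                    if (wanted) {
                        target.innerHTML = '';
                        target.appendChild(wanted);
                    }
                }
            };
            xhr.open('GET', url, true);
            xhr.send();
        }

        var holders = document.getElementsByClassName('mediaWiki-AJAX');
        for (var i = 0; i < holders.length; i++) {
            loadFragment(holders[i], holders[i].getElementsByTagName('a')[0].href);
        }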


  • Timeout error occurred trying to start MySQL Daemon. CentOS 5

    - by epema
    I ran into trouble with MySQL on my CentOS box. I had some problems, so I backed up my database and removed MySQL with all dependencies. After that I reinstalled it:

        yum groupinstall "MySQL Database"

    It installed without errors. Running the MySQL daemon:

        service mysqld start
        Timeout error occurred trying to start MySQL Daemon.
        Starting MySQL:  [FAILED]

    I also ran:

        # /usr/bin/mysql_install_db --user=mysql
        Installing MySQL system tables...
        120112  1:49:44 [ERROR] Error message file '/usr/share/mysql/english/errmsg.sys' had only 480 error messages, but it should contain at least 481 error messages.
        Check that the above file is the right version for this program!
        120112  1:49:44 [ERROR] Aborting
        Installation of system tables failed!
        Examine the logs in /var/lib/mysql for more information.
        You can try to start the mysqld daemon with:
        /usr/libexec/mysqld --skip-grant &
        and use the command line tool /usr/bin/mysql to connect to the mysql database and look at the grant tables:
        shell> /usr/bin/mysql -u root mysql
        mysql> show tables
        Try 'mysqld --help' if you have problems with paths. Using --log gives you a log in /var/lib/mysql that may be helpful.
        The latest information about MySQL is available on the web at http://www.mysql.com
        Please consult the MySQL manual section: 'Problems running mysql_install_db', and the manual section that describes problems on your OS. Another information source is the MySQL email archive. Please check all of the above before mailing us! And if you do mail us, you MUST use the /usr/bin/mysqlbug script!

    Checking the logs:

        less /var/log/mysqld.log

    The log file is empty. I don't even know how to debug it and I'm not sure what to do. Any recommendations? Thank you.


  • What can cause my code to run slower when the server JIT is activated?

    - by durandai
    I am doing some optimizations on an MPEG decoder. To ensure my optimizations aren't breaking anything I have a test suite that benchmarks the entire codebase (both optimized and original) as well as verifying that they both produce identical results (basically just feeding a couple of different streams through the decoder and crc32-ing the outputs).

    When using the "-server" option with Sun 1.6.0_18, the test suite runs about 12% slower on the optimized version after warmup (in comparison to the default "-client" setting), while the original codebase gains a good boost, running about twice as fast as in client mode. While at first this seemed to be simply a warmup issue to me, I added a loop to repeat the entire test suite multiple times. Execution times then become constant for each pass starting at the 3rd iteration of the test, yet the optimized version stays 12% slower than in client mode. I am also pretty sure it's not a garbage collection issue, since the code involves absolutely no object allocations after startup.

    The code consists mainly of some bit manipulation operations (stream decoding) and lots of basic floating point math (generating PCM audio). The only JDK classes involved are ByteArrayInputStream (feeds the stream to the test and excludes disk IO from the tests) and CRC32 (to verify the result). I also observed the same behaviour with Sun JDK 1.7.0_b98 (only that it's 15% instead of 12% there). Oh, and the tests were all done on the same machine (single core) with no other applications running (WinXP). While there is some inevitable variation on the measured execution times (using System.nanoTime btw), the variation between different test runs with the same settings never exceeded 2%, usually less than 1% (after warmup), so I conclude the effect is real and not purely induced by the measuring mechanism/machine.

    Are there any known coding patterns that perform worse on the server JIT? Failing that, what options are available to "peek" under the hood and observe what the JIT is doing there?
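
    For the "peek under the hood" part, a short sketch of HotSpot diagnostic flags that can show compilation and inlining decisions; the class name MyBenchmark stands in for the actual test-suite entry point, and the output format varies between JDK builds:

        # show which methods get JIT-compiled, and when
        java -server -XX:+PrintCompilation MyBenchmark

        # inlining decisions (diagnostic option, must be unlocked first)
        java -server -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining MyBenchmark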


  • Customizing the TFS 2008 build sequence to avoid compilation and deploy SSRS

    - by Andrew
    I'm trying to create a CI process for SQL Server Reporting Services. I am fairly new to TFS but quite experienced with MSBuild. In the past I've used a combination of MSBuild with TeamCity, so the whole build process is more or less custom. Here lies the start of my problems: the solution I am deploying only contains Report Server projects (rds), so no compilation is required. I thought that I would override the first default task that TFS runs (EndToEndIteration) to override the default TFS build sequence and inject my own.

    The first snag that I have come across is that the build always fails; how can I set the status of the build to success? Currently the EndToEndIteration task is very light and only has a message. Is this the best method to create a custom build process in TFS where compilation is not required? Or should I use the default sequence and override one of the hook tasks mentioned in http://msdn.microsoft.com/en-us/library/aa337604%28VS.80%29.aspx (ie: AfterCompile)?

    The core steps that I'd like to achieve are:

    - Bundle the RDL and datasource files
    - Connect to the host server to register/deploy the reports
    - Re-apply any subscriptions that previously existed
    - Run tests to verify the deployment succeeded and is returning results as expected

    I have found another article on Reporting Services deployment: http://stackoverflow.com/questions/88710/reporting-services-deployment But it doesn't mention the best practice for customizing the standard build process. Any help would be appreciated.


  • Linux time-based sampling profiler

    - by Caspin
    Short version: is there a good time-based sampling profiler for Linux?

    Long version: I generally use OProfile to optimize my applications. I recently found a shortcoming that has me wondering. The problem was a tight loop spawning c++filt to demangle a C++ name. I only stumbled upon the code by accident while chasing down another bottleneck. OProfile didn't show anything unusual about the code, so I almost ignored it, but my code sense told me to optimize the call and see what happened. I changed the popen of c++filt to abi::__cxa_demangle. The runtime went from more than a minute to a little over a second. About a x60 speed up.

    Is there a way I could have configured OProfile to flag the popen call? As the profile data sits now, OProfile thinks the bottleneck was the heap and std::string calls (which BTW, once optimized, dropped the runtime to less than a second, more than a x2 speed up).

    Here is my OProfile configuration:

        $ sudo opcontrol --status
        Daemon not running
        Event 0: CPU_CLK_UNHALTED:90000:0:1:1
        Separate options: library
        vmlinux file: none
        Image filter: /path/to/excutable
        Call-graph depth: 7
        Buffer size: 65536

    Is there another profiler for Linux that could have found the bottleneck? I suspect the issue is that OProfile only logs its samples to the currently running process. I'd like it to always log its samples to the process I'm profiling. So if the process is currently switched out (blocking on IO or a popen call) OProfile would just place its sample at the blocked call. If I can't fix this, OProfile will only be useful when the executable is pushing near 100% CPU. It can't help with executables that have inefficient blocking calls.


  • PHP Object References in Frameworks

    - by bigstylee
    Before I dive into the discussion part, a quick question: is there a method to determine if a variable is a reference to another variable/object? For example:

        $foo = 'Hello World';
        $bar = &$foo;
        echo (is_reference($bar) ? 'Is reference' : 'Is orginal');

    I have been using PHP5 for a few years now (personal use only) and I would say I am moderately versed on the topic of object orientated implementation. However, the concept of a Model View Controller framework is fairly new to me. I have looked at a number of tutorials and looked at some of the open source frameworks (mainly CodeIgniter) to get a better understanding of how everything fits together. I am starting to appreciate the real benefits of using this type of structure.

    I am used to implementing object referencing with the following technique:

        class Foo{
            public $var = 'Hello World!';
        }

        class Bar{
            public function __construct(){
                global $Foo;
                echo $Foo->var;
            }
        }

        $Foo = new Foo;
        $Bar = new Bar;

    I was surprised to see that CodeIgniter and Yii pass references of objects which can be accessed via the following method:

        $this->load->view('argument')

    The immediate advantage I can see is a lot less code and it is more user friendly. But I do wonder if it is more efficient, as these frameworks are presumably optimised? Or is it simply to make the code more user friendly? This was an interesting article: Do not use PHP references.
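
    For comparison, a small sketch of the dependency-injection style those frameworks lean on instead of global: the object is handed in (and in PHP5 object handles mean no copy is made) rather than pulled from the global scope. The class names mirror the example above and are purely illustrative:

        <?php
        class Foo {
            public $var = 'Hello World!';
        }

        class Bar {
            private $foo;

            // The dependency is passed in; no global, no & needed for objects in PHP5.
            public function __construct(Foo $foo) {
                $this->foo = $foo;
            }

            public function show() {
                echo $this->foo->var;
            }
        }

        $foo = new Foo();
        $bar = new Bar($foo);   // $bar works with the same underlying Foo object
        $bar->show();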


  • Why does a sub-class of a class have to be static in order to initialize the sub-class in the

    - by Alex
    So, the question is more or less as I wrote. I understand that it's probably not clear at all, so I'll give an example. I have the class Tree and in it there is the class Node, and the empty constructor of Tree is written:

        public class RBTree {
            private RBNode head;

            public RBTree(RBNode head, RBTree leftT, RBTree rightT){
                this.head = head;
                this.head.leftT.head.father = head;
                this.head.rightT.head.father = head;
            }

            public RBTree(RBNode head){
                this(head, new RBTree(), new RBTree());
            }

            public RBTree(){
                this(new RBNode(), null, null);
            }

            public class RBNode{
                private int value;
                private boolean isBlack;
                private RBNode father;
                private RBTree leftT;
                private RBTree rightT;
            }
        }

    Eclipse gives me the error "No enclosing instance of type RBTree is available due to some intermediate constructor invocation" for the "new RBTree()" in the empty constructor. However, if I change RBNode to be a static class, there is no problem. So why does it work when the class is static?

    BTW, I found an easy solution for the constructor:

        public RBTree(){
            this.head = new RBNode();
        }

    So, I have no idea what the problem is in the first piece of code.
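
    A tiny standalone illustration of the difference (not the RBTree code): a non-static inner class can only be created through an existing instance of the outer class, which is exactly what is missing while the outer constructor chain is still running; a static nested class has no such requirement.

        // Illustrative only.
        public class Outer {
            class Inner {}              // non-static: every Inner needs an Outer instance
            static class Nested {}      // static: no enclosing instance required

            public static void main(String[] args) {
                Outer outer = new Outer();
                Inner a = outer.new Inner();   // must go through an Outer instance
                Nested b = new Nested();       // fine on its own
            }
        }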


  • Filtering on a left join in SQLalchemy

    - by Adam Ernst
    Using SQLAlchemy I want to perform a left outer join and filter out rows that DO have a match in the joined table. I'm sending push notifications, so I have a Notification table. This means I also have an ExpiredDeviceId table to store device_ids that are no longer valid. (I don't want to just delete the affected notifications, as the user might later re-install the app, at which point the notifications should resume according to Apple's docs.)

        CREATE TABLE Notification (device_id TEXT, time DATETIME);
        CREATE TABLE ExpiredDeviceId (device_id TEXT PRIMARY KEY, expiration_time DATETIME);

    Note: there may be multiple Notifications per device_id. There is no "Device" table for each device. So when doing SELECT FROM Notification I should filter accordingly. I can do it in SQL:

        SELECT * FROM Notification
        LEFT OUTER JOIN ExpiredDeviceId ON Notification.device_id = ExpiredDeviceId.device_id
        WHERE expiration_time == NULL

    But how can I do it in SQLAlchemy?

        sess.query(
            Notification,
            ExpiredDeviceId
        ).outerjoin(
            (ExpiredDeviceId, Notification.device_id == ExpiredDeviceId.device_id)
        ).filter(
            ???
        )

    Alternately I could do this with a device_id NOT IN (SELECT device_id FROM ExpiredDeviceId) clause, but that seems way less efficient.
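
    A sketch of the filter that usually completes this pattern, keeping the models named as above: testing the right-hand table's key for NULL is what turns the outer join into an anti-join.

        # Sketch: anti-join via LEFT OUTER JOIN ... WHERE right side IS NULL
        live = (
            sess.query(Notification)
            .outerjoin(
                (ExpiredDeviceId, Notification.device_id == ExpiredDeviceId.device_id)
            )
            # == None renders as IS NULL; newer SQLAlchemy also spells this .is_(None)
            .filter(ExpiredDeviceId.device_id == None)
            .all()
        )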


  • Security Exception when using Custom ASP.NET Healthmonitoring event in medium trust

    - by Elementenfresser
    Hi, I'm using custom health monitoring events in ASP.NET. We recently moved to a new server with default High trust permissions. The literature says that health monitoring and custom events should work under Medium or higher trust (http://msdn.microsoft.com/en-us/library/bb398933.aspx). Problem is - it doesn't. In less than Full trust I get a SecurityException saying:

        The application attempted to perform an operation not allowed by the security policy

    It works in Full trust, or when I remove the inheritance of System.Web.Management.WebErrorEvent. Any suggestions anyone? Here is the super simple code behind with a custom event defined:

        public partial class Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                try
                {
                    CallCustomEvent();
                }
                catch (Exception ex)
                {
                    Response.Write(ex.Message);
                    throw ex;
                }
            }

            /// <summary>
            /// this method is never called due to lacking permissions...
            /// </summary>
            private void CallCustomEvent()
            {
                try
                {
                    //do something useful here
                }
                catch (Exception)
                {
                    //code to instantiate the forbidden inheritance...
                    WebBaseEvent.Raise(new CustomEvent());
                }
            }
        }

        /// <summary>
        /// custom error inheriting WebErrorEvent which is not allowed in high trust? can't believe that...
        /// </summary>
        public class CustomEvent : WebErrorEvent
        {
            public CustomEvent()
                : base("test", HttpContext.Current.Request, 100001, new ApplicationException("dummy"))
            {
            }
        }

    And the web.config excerpt for High trust:

        <system.web>
            <trust level="High" originUrl="" />


  • Why is the main method not covered? Urgent, please help me

    - by Mike.Huang
    The main method:

        public static void main(String[] args) throws Exception {
            if (args.length != EXPECTED_NUMBER_OF_ARGUMENTS) {
                System.err.println("Usage - java XFRCompiler ConfigXML PackageXML XFR");
            }

            String configXML = args[0];
            String packageXML = args[1];
            String xfr = args[2];

            AutoConfigCompiler compiler = new AutoConfigCompiler();
            compiler.setConfigDocument(loadDocument(configXML));
            compiler.setPackageInfoDoc(loadDocument(packageXML));
            // compiler.setVisiblityDoc(loadDocument("VisibilityFilter.xml"));
            compiler.compileModel(xfr);
        }

        private static Document loadDocument(String fileName) throws Exception {
            TXDOMParser parser = (TXDOMParser) ParserFactory.makeParser(TXDOMParser.class.getName());
            InputSource source = new InputSource(new FileInputStream(fileName));
            parser.parse(source);
            return parser.getDocument();
        }

    The test case:

        @Test
        public void testCompileModel() throws Exception {
            // construct parameters
            URL configFile = Thread.currentThread().getContextClassLoader().getResource("Ford_2008_Mustang_Config.xml");
            URL packageFile = Thread.currentThread().getContextClassLoader().getResource("Ford_2008_Mustang_Package.xml");
            File tmpFile = new File("Ford_2008_Mustang_tmp.xfr");
            if(!tmpFile.exists()) {
                tmpFile.createNewFile();
            }
            String[] args = new String[]{configFile.getPath(), packageFile.getPath(), tmpFile.getPath()};
            try {
                // test main method
                XFRCompiler.main(args);
            } catch (Exception e) {
                assertTrue(true);
            }
            try {
                // test args length is less than 3
                XFRCompiler.main(new String[]{"",""});
            } catch (Exception e) {
                assertTrue(true);
            }
            tmpFile.delete();
        }

    The coverage output shows that the lines from "String configXML = args[0];" onward in the main method are not covered.


  • Prevent two users from editing the same data

    - by Industrial
    Hi everyone, I have seen a feature in different web applications, including Wordpress (not sure?), that warns a user if he/she opens an article/post/page/whatever from the database while someone else is editing the same data simultaneously. I would like to implement the same feature in my own application and I have given this a bit of thought. Is the following example a good practice on how to do this? It goes a little something like this:

    1) User A enters the editing page for the mysterious article X. The database table Events is queried to make sure that no one else is editing the same page at the moment, which no one is by then. A token is then randomly generated and inserted into a database table called Events.

    2) User B also wants to make updates to article X. Now, since our User A is already editing the article, the Events table is queried and looks like this:

        | timestamp  | owner  | Origin    | token      |
        ------------------------------------------------
        | 1273226321 | User A | article-x | uniqueid## |

    3) The timestamp is checked. If it's valid and less than, say, 100 seconds old, a message appears and the user cannot make any changes to the requested article X: "Warning: User A is currently working with this article. In the meantime, editing cannot be done. Please do something else with your life."

    4) If User A decides to go on and save his changes, the token is posted along with all other data to update the database, and triggers a query to delete the row with token uniqueid##. If he decides to do something else instead of committing his changes, article X will still be available for editing in 100 seconds for User B.

    Let me know what you think about this approach! Wish everyone a great weekend!
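
    As a sketch of how the check in steps 1-2 can be made race-free, here is one MySQL-flavoured way to expire stale locks and claim the article in a single insert. It assumes a UNIQUE constraint on Origin (not part of the original description), so two users cannot grab the same article between the check and the insert; the token value is a placeholder:

        -- throw away locks older than the 100-second window
        DELETE FROM Events WHERE timestamp < UNIX_TIMESTAMP() - 100;

        -- claim the article; a duplicate-key error here means someone else holds the lock,
        -- which is the signal to show the "currently being edited" warning
        INSERT INTO Events (timestamp, owner, Origin, token)
        VALUES (UNIX_TIMESTAMP(), 'User B', 'article-x', 'some-random-token');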


  • Dynamic Array traversal in PHP

    - by Kristoffer Bohmann
    I want to build a hierarchy from a one-dimensional array and can (almost) do so with a more or less hardcoded function. How can I make the code dynamic? Perhaps with while(isset($array[$key])) { ... }? Or with an extra function, like this: $out = my_extra_traverse_function($array, $key);

        function array_traverse($array, $key=NULL)
        {
            $out = (string) $key;
            $out = $array[$key] . "/" . $out;
            $key = $array[$key];
            $out = $array[$key] ? $array[$key] . "/" . $out : "";
            $key = $array[$key];
            $out = $array[$key] ? $array[$key] . "/" . $out : "";
            $key = $array[$key];
            $out = $array[$key] ? $array[$key] . "/" . $out : "";
            return $out;
        }

        $a = Array(102=>101, 103=>102, 105=>107, 109=>105, 111=>109, 104=>111);
        echo array_traverse($a, 104);

    Output:

        107/105/109/111/104
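
    A sketch of the loop-based version hinted at in the question, with the same array shape (each key maps to its parent); the function name is illustrative:

        <?php
        function array_traverse_dynamic(array $array, $key)
        {
            $out = (string) $key;
            // walk parent links until a key has no entry in the array
            while (isset($array[$key])) {
                $key = $array[$key];
                $out = $key . "/" . $out;
            }
            return $out;
        }

        $a = array(102 => 101, 103 => 102, 105 => 107, 109 => 105, 111 => 109, 104 => 111);
        echo array_traverse_dynamic($a, 104);   // 107/105/109/111/104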


  • Upload using python script takes very long on one laptop as compared to another

    - by Engr Am
    I have Python 2.7 code which uses the STORBINARY function for uploading files to an FTP server and RETRBINARY for downloading from this server. However, the issue is that the upload is taking a very long time on three laptops from different brands as compared to a Dell laptop. The strange part is that when I manually upload any file, it takes the same time on all the systems.

    The manual upload rate and the upload rate with the Python script are the same on the Dell laptop. However, on every other brand of laptop (I have tried IBM, Toshiba, Fujitsu-Siemens) the Python script has a much lower upload rate than the manual attempt. Also, on all these other laptops, the upload rate using the Python script is the same (1 Mbit/s) while the manual upload rate is approx. 8 Mbit/s. I have tried to vary the filesize for the upload to no avail. TCP Optimizer improved the download rate on all the systems but had no effect on the upload rate. The download rate using this script on all the systems is fine and the same as the manual download rate.

    I have checked the server and it has more than 90% free space. The network connection is the same for all the laptops, and I try uploading with only one laptop at a time. All the laptops have almost the same system configurations, the same operating system and approximately the same free drive space. If anything the Dell laptop is a little less in terms of processing power and RAM than 2 of the others, but I suppose this has no effect, as I have checked many times to see how much CPU and network were used during these uploads and downloads, and I am sure that no other virus or program has been eating up my bandwidth.

    Here is the code ('ftp' and 'file_path' are inputs to the function):

        path, filename = os.path.split(file_path)
        filesize = os.path.getsize(file_path)
        deffilesize = (filesize/1024)/1024
        f = open(file_path, "rb")
        upstart = time.clock()
        print ftp.storbinary("STOR "+filename, f)
        upende = time.clock()-upstart
        outname = "Upload "
        f.close()
        return upende, deffilesize, outname
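
    One knob worth testing, sketched below: ftplib's storbinary accepts a blocksize argument (default 8192 bytes), and a larger block can behave differently on some network stacks. The 32768 value is just something to experiment with, not a recommendation from the original post, and the helper name is illustrative:

        import os

        def upload(ftp, file_path, blocksize=32768):
            """Upload file_path over an existing ftplib.FTP connection."""
            filename = os.path.basename(file_path)
            with open(file_path, "rb") as f:
                # Same STOR command as above, but with an explicit block size.
                return ftp.storbinary("STOR " + filename, f, blocksize)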


  • Performing calculations by subsets of data in R

    - by Vivi
    I want to perform calculations for each company number in the column PERMNO of my data frame, the summary of which can be seen here:

        > summary(companydataRETS)
             PERMNO            RET
         Min.   :10000   Min.   :-0.971698
         1st Qu.:32716   1st Qu.:-0.011905
         Median :61735   Median : 0.000000
         Mean   :56788   Mean   : 0.000799
         3rd Qu.:80280   3rd Qu.: 0.010989
         Max.   :93436   Max.   :19.000000

    My solution so far was to create a variable with all possible company numbers:

        compns <- companydataRETS[!duplicated(companydataRETS[,"PERMNO"]),"PERMNO"]

    And then use a foreach loop with parallel computing which calls my function get.rho(), which in turn performs the desired calculations:

        rhos <- foreach (i=1:length(compns), .combine=rbind) %dopar%
            get.rho(subset(companydataRETS[,"RET"], companydataRETS$PERMNO == compns[i]))

    I tested it for a subset of my data and it all works. The problem is that I have 72 million observations, and even after leaving the computer working overnight, it still didn't finish. I am new to R, so I imagine my code structure can be improved upon and there is a better (quicker, less computationally intensive) way to perform this same task (perhaps using apply or with, both of which I don't understand). Any suggestions?
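
    A sketch of the split-then-apply pattern alluded to at the end of the question: splitting RET by PERMNO once avoids re-scanning all 72 million rows for every company. It assumes get.rho takes a numeric vector of returns, as in the foreach version above:

        # one pass to group the returns by company, then apply get.rho per group
        rets.by.permno <- split(companydataRETS$RET, companydataRETS$PERMNO)
        rhos <- sapply(rets.by.permno, get.rho)   # simplifies to a vector or matrix

        # data.table is another common route for data this size (optional):
        # library(data.table)
        # DT <- as.data.table(companydataRETS)
        # rhos <- DT[, .(rho = get.rho(RET)), by = PERMNO]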


  • In CSS, how to not float a 300px wide Div to the next line?

    - by Jian Lin
    Say there is a bar that is styled at the bottom of the viewport, using:

        position: fixed; bottom: 0; left: 0; width: 100%; height: 50px; overflow: hidden;

    and there are 4 divs inside it, each one floated to the left. Each div is about 300px wide, or can be more (depending on the content). Now, when the window is 1200 pixels wide we see all 4 divs, but when the window is resized to 1180 pixels wide (just 20 pixels less), the whole 300px wide div will disappear, because it is "floated" to the next line. So how can this be made so that the div will stay there, showing 280px of itself, rather than totally disappearing?

    By the way, white-space: nowrap won't work, as that probably has to do with not wrapping inline content. I was thinking of putting another div inside this div, having a fixed width of 1200px or 2000px, so that all divs will float on the same level in this inner div, and the outer div will cut it off with the overflow: hidden. But this seems more like a hack... since the width of all those divs can be dynamic, and setting a fixed width of 1200px or 2000px seems like too much of a hack.
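
    For what it's worth, a small sketch of the inline-block variant of this layout: white-space: nowrap does apply once the children are display: inline-block instead of floats, so the row never wraps and the fixed-height bar simply clips the overflow. The class names are illustrative:

        /* illustrative class names */
        .bottom-bar {
            position: fixed;
            bottom: 0;
            left: 0;
            width: 100%;
            height: 50px;
            overflow: hidden;
            white-space: nowrap;   /* keeps the inline-block children on one line */
        }
        .bottom-bar .panel {
            display: inline-block;
            vertical-align: top;
            width: 300px;          /* or any dynamic width */
        }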


  • Is it possible to "trick" PrintScreen, swap out the contents of my form with something else before c

    - by Lasse V. Karlsen
    I have a bit of a challenge. In an earlier version of our product, we had an error message window (last resort, unhandled exception) that showed the exception message, type, stack trace + various bits and pieces of information. This window was printscreen-friendly, in that if the user simply did a printscreen capture and emailed us the screenshot, we had almost everything we needed to start diagnosing the problem.

    However, the form was deemed too technical and "scary" for normal users, so it was toned down to a more friendly one, still showing the error message, but not the stack trace and some of the more gory details that I'd still like to get. In addition, the form was given the capability of emailing us a text file containing everything we had before + lots of other technical details as well, basically everything we need. However, users still use PrintScreen to capture the contents of the form and email that back to us, which means I now have a less than optimal amount of information to go on.

    So I was wondering: would it be possible for me to pre-render a bitmap the same size as my form, with everything I need on it, detect that PrintScreen was hit, quickly swap out the form contents with my bitmap before capture, and then swap back again afterwards? And before you say "just educate the users", yes, that's not going to work. These are not our users, they're users at our customers' place, so we really cannot tell them to wise up all that much.

    Or, barring this, is there a way for me to detect PrintScreen, tell Windows to ignore it, and instead react to it by dumping the aforementioned pre-rendered bitmap onto the clipboard, ready for placing into an email? The code is C# 3.0 in .NET 3.5, if it matters, but pointers for something to look at/for are good enough.


  • Wrappers of primitive types in arraylist vs arrays

    - by ismail marmoush
    Hi, in "Core Java 1" I've read:

        CAUTION: An ArrayList is far less efficient than an int[] array because each value is separately wrapped inside an object. You would only want to use this construct for small collections when programmer convenience is more important than efficiency.

    But in my software I've already used ArrayList instead of normal arrays due to some requirements, even though "the software is supposed to have high performance", and after I read the quoted text I started to panic! One thing I can change is changing double variables to Double so as to prevent auto boxing, and I don't know if that is worth it or not, as in the next sample algorithm:

        public void multiply(final double val) {
            final int rows = getSize1();
            final int cols = getSize2();
            for (int i = 0; i < rows; i++) {
                for (int j = 0; j < cols; j++) {
                    this.get(i).set(j, this.get(i).get(j) * val);
                }
            }
        }

    My first question is: does changing double to Double make a difference, or is that a micro-optimization that won't affect anything? Keep in mind I might be using large matrices. Second, should I consider redesigning the whole program again?


  • how to get access to private members of nested class?

    - by macias
    Background: I have an enclosing (parent) class E with a nested class N, and several instances of N in E. In the enclosing (parent) class I am doing some calculations and I am setting the values for each instance of the nested class. Something like this:

        n1.field1 = ...;
        n1.field2 = ...;
        n1.field3 = ...;
        n2.field1 = ...;
        ...

    It is one big eval method (in the parent class). My intention is -- since all calculations are in the parent class (they cannot be done per nested instance because it would make the code more complicated) -- to make the setters only available to the parent class and the getters public. And now there is a problem:

    - when I make the setters private, the parent class cannot access them
    - when I make them public, everybody can change the values, and C# does not have the friend concept
    - I cannot pass values in the constructor because a lazy evaluation mechanism is used (so the instances have to be created when referencing them -- I create all objects and the calculation is triggered on demand)

    I am stuck -- how to do this (limit access up to the parent class, no more, no less)?

    I suspect I'll get an answer-question first -- "but why don't you split the evaluation per each field" -- so I answer this with an example: how do you calculate the min and max value of a collection? In a fast way? The answer is -- in one pass. This is why I have one eval function which does the calculations and sets all fields at once.


  • How to access elements in a complex list?

    - by Martin
    Hi, it's me and the lists again. I have a nice list, which looks like this:

        tmp = NULL
        t = NULL
        tmp$resultitem$count = "1057230"
        tmp$resultitem$status = "Ok"
        tmp$resultitem$menu = "PubMed"
        tmp$resultitem$dbname = "pubmed"
        t$resultitem$count = "305215"
        t$resultitem$status = "Ok"
        t$resultitem$menu = "PMC"
        t$resultitem$dbname = "pmc"
        tmp = c(tmp, t)
        t = NULL
        t$resultitem$count = "1"
        t$resultitem$status = "Ok"
        t$resultitem$menu = "Journals"
        t$resultitem$dbname = "journals"
        tmp = c(tmp, t)

    Which produces:

        str(tmp)
        List of 3
         $ resultitem:List of 4
          ..$ count : chr "1057230"
          ..$ status: chr "Ok"
          ..$ menu  : chr "PubMed"
          ..$ dbname: chr "pubmed"
         $ resultitem:List of 4
          ..$ count : chr "305215"
          ..$ status: chr "Ok"
          ..$ menu  : chr "PMC"
          ..$ dbname: chr "pmc"
         $ resultitem:List of 4
          ..$ count : chr "1"
          ..$ status: chr "Ok"
          ..$ menu  : chr "Journals"
          ..$ dbname: chr "journals"

    Now I want to search through the elements of each "resultitem". I want to know the "dbname" for every database that has less than 10 "count" (example). In this case it is very easy, as this list only has 3 elements, but the real list is a little bit longer. This could be done with a simple for loop, but is there a way to do it with some other function of R (like rapply)? My problem with those apply functions is that they only look at one element. If I do a grep to get all "dbname" elements, I cannot get the count of each element:

        rapply(tmp, function(x) paste("Content: ", x))[grep("dbname", names(rapply(tmp, c)))]

    Does someone have a better idea than a for loop? Thanx, Martin
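
    A sketch of the usual sapply approach, which looks at whole sub-lists rather than single leaves and so keeps count and dbname together; the threshold of 10 is taken from the example in the question:

        # count per resultitem, as numbers
        counts <- sapply(tmp, function(item) as.numeric(item$count))

        # dbname of every database with fewer than 10 counts
        dbnames <- sapply(tmp, function(item) item$dbname)[counts < 10]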

