Search Results

Search found 22903 results on 917 pages for 'full length screenshots'.


  • Length of array (C) [migrated]

    - by physics
    I have an array of 10 elements. The array stores some information, and the data is terminated by a sentinel symbol "END" (which is -1). For example: array[0] = 1; array[1] = 1; array[2] = 1; array[3] = 1; array[4] = 1; array[5] = 1; array[6] = END; So in this case the length is 7 instead of 10. Here is my algorithm: for (i = 0; (i < 10) && (array[i] != END); i++) { ; } At the end, i contains the length. Is this correct, or is there another method?
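
    The loop is fine, with one subtlety: it stops at the sentinel's index, so for the example above i ends up as 6; it is 7 only if the END marker itself is meant to be counted. Below is a minimal sketch (not from the original post; names are illustrative) that wraps the same idea in a reusable function:

      #include <stdio.h>

      #define END (-1)

      /* Sketch: count the elements that appear before the END sentinel.
         If no sentinel is present, the full capacity is returned. */
      static int used_length(const int *array, int capacity)
      {
          int i;
          for (i = 0; i < capacity && array[i] != END; i++)
              ;
          return i;
      }

      int main(void)
      {
          int array[10] = {1, 1, 1, 1, 1, 1, END};
          printf("%d\n", used_length(array, 10)); /* prints 6: six values before END */
          return 0;
      }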

    Read the article

  • Tool to annotate pictures (screenshots) for documentation purposes?

    - by René Nyffenegger
    A long time ago, I saw someone use a program (on Windows) that was specifically created to annotate pictures. It made it simple to add arrows, boxes, and circles in bright, eye-catching colors to an image. Unfortunately, I don't remember what program that was. Now I have to document a GUI, and I'd like to use such a tool to annotate screenshots of the software so that I can show the flow and the dependencies between various parts of the GUI. I'd be very happy if someone could point me in the right direction.

    Read the article

  • Monitoring remote employees with screenshots and screen sharing?

    - by Lulu
    I'm looking for a way to track and monitor the online employees I hire on Elance, oDesk and the like. The tool should be able to: produce screenshots at set intervals; provide real-time screen sharing; and, preferably, track computer usage while the user is logged in as "working" and state which task was done in that time. If there is no all-in-one solution, I will go with RealVNC for screen sharing, but I still need a recommendation for the other features. Thanks

    Read the article

  • How to find the length of a linked list that is having cycles in it?

    - by Bragaadeesh
    This was one of the questions asked in an interview: how do you find the length of a linked list that has a cycle in it? I know how to detect whether a linked list has a cycle using the hare-and-tortoise technique. I also know how to calculate the length by storing the node addresses in a hash set. But what I was not able to answer is how to calculate the length of the linked list without using O(n) extra space. Please help me. Thanks.
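
    One way to do this in O(1) extra space builds on the same hare-and-tortoise pointers: once the two pointers meet inside the cycle, walk around the cycle to count its nodes, then advance one pointer from the head and one from the meeting point in lockstep to count the nodes before the cycle starts. The C sketch below assumes a simple singly linked node type; the names are illustrative, not from the original question.

      #include <stddef.h>

      struct node { int value; struct node *next; };

      /* Sketch of the O(1)-space idea (Floyd's tortoise and hare):
         returns the total number of nodes, whether or not the list has a cycle. */
      size_t list_length(struct node *head)
      {
          struct node *slow = head, *fast = head;

          /* Phase 1: detect a cycle. */
          while (fast && fast->next) {
              slow = slow->next;
              fast = fast->next->next;
              if (slow == fast)
                  break;
          }
          if (!fast || !fast->next) {          /* no cycle: just count nodes */
              size_t n = 0;
              for (struct node *p = head; p; p = p->next)
                  n++;
              return n;
          }

          /* Phase 2: count the nodes inside the cycle. */
          size_t cycle_len = 1;
          for (struct node *p = slow->next; p != slow; p = p->next)
              cycle_len++;

          /* Phase 3: count the nodes before the cycle starts. The two pointers,
             advanced in lockstep from the head and from the meeting point,
             meet at the cycle's first node. */
          size_t tail_len = 0;
          slow = head;
          while (slow != fast) {
              slow = slow->next;
              fast = fast->next;
              tail_len++;
          }
          return tail_len + cycle_len;
      }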

    Read the article

  • Duplicity Full Backup Lifetime and Efficiency

    - by Tim Lytle
    I'm trying to work up a backup strategy for some clients, and I'm leaning towards duplicity for remote backup (I already use rdiff-backup for internal/on-site backups). Is it reasonable to want a full backup every so often? Since duplicity increments forward, each incremental backup relies on the previous increment, and all of them rely heavily on the last full backup; should that become corrupt, bad things happen. A related question: does duplicity test the incremental backups for consistency? Assuming I do want a full backup every so often, how efficiently does duplicity create it? Can it check file signatures and copy unchanged data from previous full backups/increments, essentially creating a new 'full' archive by transferring only new/changed data and merging in existing unchanged data? Right now my concern is that periodic full backups are needed, but the consistently large bandwidth they consume will make this unreasonable for some clients.

    Read the article

  • Selenium screenshots using rspec

    - by Thomas Albright
    I am trying to capture screenshots on test failure using selenium-client and rspec. I run this command: $ spec my_spec.rb \ --require 'rubygems,selenium/rspec/reporting/selenium_test_report_formatter' \ --format=Selenium::RSpec::SeleniumTestReportFormatter:./report.html It creates the report correctly when everything passes, since no screenshots are required. However, when the test fails, I get this message, and the report has blank screenshots: WARNING: Could not capture HTML snapshot: execution expired WARNING: Could not capture page screenshot: execution expired WARNING: Could not capture system screenshot: execution expired Problem while capturing system state: execution expired What is causing this 'execution expired' error? Am I missing something important in my spec? Here is the code for my_spec.rb: require 'rubygems' gem "rspec", "=1.2.8" gem "selenium-client" require "selenium/client" require "selenium/rspec/spec_helper" describe "Databases" do attr_reader :selenium_driver alias :page :selenium_driver before(:all) do @selenium_driver = Selenium::Client::Driver.new \ :host => "192.168.0.10", :port => 4444, :browser => "*firefox", :url => "http://192.168.0.11/", :timeout_in_seconds => 10 end before(:each) do @selenium_driver.start_new_browser_session end # The system capture needs to happen BEFORE closing the Selenium session append_after(:each) do @selenium_driver.close_current_browser_session end it "backed up" do page.open "/SQLDBDetails.aspx" page.click "btnBackup", :wait_for => :page page.text?("Pending Backup").should be_true end end

    Read the article

  • SQL Server full text query across multiple tables - why so slow?

    - by Mikey Cee
    Hi. I'm trying to understand the performance of a SQL Server 2008 full-text query I am constructing. The following query, using a full-text index, returns the correct results immediately: SELECT O.ID, O.Name FROM dbo.EventOccurrence O WHERE FREETEXT(O.Name, 'query') i.e., all EventOccurrences with the word 'query' in their name. And the following query, using a full-text index on a different table, also returns straight away: SELECT V.ID, V.Name FROM dbo.Venue V WHERE FREETEXT(V.Name, 'query') i.e., all Venues with the word 'query' in their name. But if I try to join the tables and do both full-text queries at once, it takes 12 seconds to return: SELECT O.ID, O.Name FROM dbo.EventOccurrence O INNER JOIN dbo.Event E ON O.EventID = E.ID INNER JOIN dbo.Venue V ON E.VenueID = V.ID WHERE FREETEXT(E.Name, 'search') OR FREETEXT(V.Name, 'search') Here is the execution plan: http://uploadpad.com/files/query.PNG From my reading, I didn't think it was even possible to make a free-text query across multiple tables in this way, so I'm not sure I am understanding this correctly. Note that if I remove the WHERE clause from this last query then it returns all results within a second, so it's definitely the full-text part that is causing the issue here. Can someone explain (i) why this is so slow and (ii) whether this is even supported / whether I am understanding this correctly? Thanks in advance for your help.

    Read the article

  • How to, set-justification-full per line in emacs without messing up on return?

    - by inaki
    Hi, I have two problems in Emacs. First: how do I apply set-justification-full to the whole document? I can do M-x set-justification-full for a region successfully, but I would like it to work on the whole buffer. Second: how do I keep lines from jumping from one place to another when I have done set-justification-full and then press Enter? That is, say I have the following paragraph: %%if normalized beforehand then the rule would be, %%\begin{gather} %%(\hat{y}_{i}^{'} \times \hat{y}_{i+1}^{'}) \cdot \hat{z}_{mst} = 1, \quad then \ \Omega 1\\ %%(\hat{y}_{i}^{'} \times \hat{y}_{i+1}^{'}) \cdot \hat{z}_{mst} = %%-1,\quad then \ \Omega When I do set-justification-full, it converts six lines into three; what I actually want is per-line justification. Is this possible in Emacs? Thank you all very much for your help. Inhaki2006

    Read the article

  • SQL SERVER – FT_IFTS_SCHEDULER_IDLE_WAIT – Full Text – Wait Type – Day 13 of 28

    - by pinaldave
    In the last few days during this series, I have received many questions about this wait type. It would be great if you read my original wait stats query in the first post, because I have filtered this wait type out in the WHERE clause. However, I still get questions about this being one of the most common wait types people encounter. The truth is, this is background task processing; it really does not matter, and it should be filtered out. There are many new wait types related to Full Text Search that were introduced in SQL Server 2008. If you run the following query, you will be able to find them in the list. Currently there is not enough information available for all of them on BOL or anywhere else, but don't worry; I will write an in-depth article when I learn more about them.
    SELECT * FROM sys.dm_os_wait_stats WHERE wait_type LIKE 'FT_%'
    The result set will contain the following rows: FT_RESTART_CRAWL, FT_METADATA_MUTEX, FT_IFTSHC_MUTEX, FT_IFTSISM_MUTEX, FT_IFTS_RWLOCK, FT_COMPROWSET_RWLOCK, FT_MASTER_MERGE, FT_IFTS_SCHEDULER_IDLE_WAIT.
    We have understood so far that there is not much information available. But when you do have this wait type, what should you do? The answer is to filter it out for the moment (i.e., do not pay attention to it) and focus on other pressing issues in wait stats or performance tuning. Here are two of my informal suggestions, which are totally independent of wait stats: turn off the Full Text Search service on your system if you are not actually using it on that server, and learn proper Full Text Search methodology; you can get Michael Coles' book, Pro Full-Text Search in SQL Server 2008. Now I invite you to share your suggestions or any input regarding Full Text-related best practices and this wait stats issue. Please leave a comment.
    Note: The information presented here is from my experience, and I do not claim it to be authoritative. I suggest reading Books Online for further clarification. All the discussions of wait stats on this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server.
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • SQL SERVER – Transaction Log Full – Transaction Log Larger than Data File – Notes from Fields #001

    - by Pinal Dave
    I am very excited to announce a new series on this blog – Notes from Fields. I have been blogging here for almost 7 years and it has been a wonderful experience. Though I have extensive experience with SQL and databases, it is always a good idea to consult experts for their advice and opinions. Following that thought, I have started this new series, Notes from Fields. In this series we will have notes from various experts in the database world. My friends at Linchpin People have graciously decided to support me in this new initiative. Linchpin People are database coaches and wellness experts for a data-driven world.
    In this very first episode of the Notes from Fields series, database expert Tim Radney (partner at Linchpin People) explains a very common issue DBAs and developers face in their careers: the database log filling up the hard drive, or the database log growing larger than the data file. Read Tim's experience in his own words.
    As a consultant, I encounter a number of common issues with clients. One of the more common things I find is a user database in the FULL recovery model that does not take regular transaction log backups, or has never had a transaction log backup at all. When I find this, the transaction log is usually several times larger than the data file. Finding this issue is very significant to me in that it allows me to discuss service level agreements with the client. I get to ask questions such as: are nightly full backups sufficient, or do they need point-in-time recovery? This conversation engages the customer and gets them thinking about their disaster recovery and high availability solutions. This issue is also very prominent on SQL Server forums and usually carries a title like "Help, my transaction log has filled up my disk" or "Help, my transaction log is many times the size of my database".
    In cases where the client only needs the previous night's full backup, I am able to change the recovery model to SIMPLE and shrink the transaction log using DBCC SHRINKFILE (2,1), or by specifying the transaction log file name with DBCC SHRINKFILE (file_name, target_size). When the client needs point-in-time recovery, in most cases I will still end up switching them to the SIMPLE recovery model to truncate the transaction log, followed by a full backup. I will then schedule a SQL Agent job to take regular transaction log backups at an interval determined with the client to meet their service level agreements. It should also be noted that when I find an overgrown transaction log, the virtual log file count is typically also out of control; my cleanup will always take that into account as well. That is a subject for a future blog post.
    If your SQL Server is facing any issue we can Fix Your SQL Server.
    Additional reading: Monitoring SQL Server Database Transaction Log Space Growth – DBCC SQLPERF(logspace); SQL SERVER – How to Stop Growing Log File Too Big; Shrinking Truncate Log File – Log Full.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • How to calculate Content-Length for a file download within Kohana PHP?

    - by moritzd
    I'm trying to send a file from within a Kohana model to the browser, but as soon as I add a Content-Length header, the file doesn't start downloading right away. The problem seems to be that Kohana is already buffering output. An ob_clean at the beginning of the script doesn't help, though. Adding ob_get_length() to the Content-Length doesn't help either, since it just returns 0. The getFileSize() function returns the right number: if I run the script outside of Kohana, it works. I read that exit() still calls all destructors, and it might be that something is output by Kohana afterwards, but I can't find out what exactly. Hope someone can help me out here... This is the piece of code I'm using: public function download() { header("Expires: ".gmdate("D, d M Y H:i:s",time()+(3600*7))." GMT\n"); header("Content-Type: ".$this->getFileType()."\n"); header("Content-Transfer-Encoding: binary\n"); header("Last-Modified: " . gmdate("D, d M Y H:i:s",$this->getCreateTime()) . " GMT\n"); header("Content-Length: ".($this->getFileSize()+ob_get_length())."\n"); header('Content-Disposition: attachment; filename="'.basename($this->getFileName())."\"\n\n"); ob_end_flush(); readfile($this->getFilePath()); exit(); }

    Read the article

  • Performance Enhancement in Full-Text Search Query

    - by Calvin Sun
    Ever since its first release, we have continued consolidating and developing the InnoDB Full-Text Search feature, and there is one recent improvement worth blogging about. It is a joint effort with the MySQL Optimizer team that simplifies the query plans of some common queries and dramatically shortens query time. I will describe the issue, our solution, and the end result, with some performance numbers to demonstrate our continuing enhancement of the Full-Text Search capability.
    The issue: As we discussed in previous blogs, InnoDB implements the Full-Text index as an inverted index stored in auxiliary tables. Once parsed, a query is reinterpreted as several queries against the related auxiliary tables, and the results are then merged and consolidated to produce the final result. So at the end of the query we have all matching records on hand, sorted by their ranking or by their Doc IDs. Unfortunately, MySQL's optimizer and query processing had initially been designed for the MyISAM Full-Text index and sometimes did not fully utilize the complete result package from InnoDB. Here are a couple of examples.
    Case 1: Query result ordered by rank, with only the top N results:
    mysql> SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS SCORE FROM articles ORDER BY score DESC LIMIT 1;
    In this query, the user tries to retrieve the single record with the highest ranking. It should have a quick answer once we have all the matching documents on hand, especially since they are already ranked. However, before this change, MySQL would retrieve the ranking for almost every row in the table, sort them all, and then come up with the top result. This whole retrieve-and-sort is quite unnecessary given that InnoDB already has the answer. In a real-life case the table could have millions of rows, so under the old scheme MySQL would retrieve and sort millions of rankings even if our FTS had already found only 3 matching rows; the million-row ranking retrieval is done in vain. In the case above, it should just ask for the 3 matching rows' rankings (all other rows' rankings are 0), and if it wants the top ranking it can simply take the first record from our already-sorted result.
    Case 2: SELECT COUNT(*) on matching records:
    mysql> SELECT COUNT(*) FROM articles WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE);
    In this case, the InnoDB search can find the matching rows quickly and will have all of them on hand. However, before our change, every row in the table was requested by MySQL one by one, just to check whether its ranking is larger than 0, and the count was produced afterwards. In fact, there is no need for MySQL to fetch all rows; InnoDB already has all the matching records, and the only thing needed is to call an InnoDB API to retrieve the count.
    The difference can be huge. The following query output shows how big it can be:
    mysql> select count(*) from searchindex_inno where match(si_title, si_text) against ('people');
    +----------+
    | count(*) |
    +----------+
    |   666877 |
    +----------+
    1 row in set (16 min 17.37 sec)
    So the query took almost 16 minutes. Let's see how long InnoDB itself needs to come up with the result. In InnoDB, you can obtain extra diagnostic printout by turning on "innodb_ft_enable_diag_print"; this prints extra query info to the error log:
    keynr=2, 'people' NL search Total docs: 10954826 Total words: 0 UNION: Searching: 'people' Processing time: 2 secs: row(s) 666877: error: 10
    ft_init() ft_init_ext() keynr=2, 'people' NL search Total docs: 10954826 Total words: 0 UNION: Searching: 'people' Processing time: 3 secs: row(s) 666877: error: 10
    The output shows it took InnoDB only 3 seconds to get the result, while the whole query took 16 minutes to finish. A large amount of time was wasted on the unneeded row fetching.
    The solution: The solution is obvious: MySQL can skip some of its steps, optimize its plan, and obtain useful information directly from InnoDB. The savings include:
    1) Avoid redundant sorting. Since InnoDB has already sorted the result according to ranking, the MySQL query processing layer does not need to sort again to get the top matching results.
    2) Avoid row-by-row fetching to get the matching count. InnoDB provides all the matching records; everything not in the result list has a ranking of 0 and does not need to be retrieved, and InnoDB already has a count of the total matching records on hand. There is no need to recount.
    3) Covered index scan. The InnoDB result always contains the matching records' Document IDs and their rankings, so if only the Document ID and ranking are needed, there is no need to go to the user table to fetch the record itself.
    4) Narrow the search result early to reduce user-table access. If the user wants the top N matching records, we do not need to fetch all matching records from the user table; we can first select the top N matching Doc IDs and then fetch only the corresponding records.
    Performance results and comparison with MyISAM: The effect of this change is very obvious. I include six test results performed by Alexander Rubin to demonstrate how fast the InnoDB query now is compared with MyISAM Full-Text Search. These tests are based on the English Wikipedia data of 5.4 million rows, approximately a 16 GB table, performed on a machine with one dual-core CPU, an SSD drive, 8 GB of RAM, and the InnoDB buffer pool set to 8 GB.
    Table 1: SELECT with LIMIT clause
    mysql> SELECT si_title, match(si_title, si_text) against('family') as rel FROM si WHERE match(si_title, si_text) against('family') ORDER BY rel desc LIMIT 10;
    Time for the query: InnoDB 1.63 sec | MyISAM 3 min 26.31 sec | 127 times faster
    You can see that for this particular query (retrieve the top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM.
    Table 2: SELECT COUNT query
    mysql> select count(*) from si where match(si_title, si_text) against('family');
    +----------+
    | count(*) |
    +----------+
    |   293955 |
    +----------+
    Time for the query: InnoDB 1.35 sec | MyISAM 28 min 59.59 sec | 1289 times faster
    In this particular case, where there are 293k matching results, InnoDB took only 1.35 seconds to get all of them, while it took MyISAM almost half an hour; that is about 1289 times faster!
    Table 3: SELECT ID with ORDER BY and LIMIT clause for selected terms
    mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;
    Term                                      | InnoDB   | MyISAM    | Times faster
    family                                    | 0.5 sec  | 5.05 sec  | 10.1
    family film                               | 0.95 sec | 25.39 sec | 26.7
    Pizza restaurant orange county California | 0.93 sec | 32.03 sec | 34.4
    President united states of America        | 2.5 sec  | 36.98 sec | 14.8
    Table 4: SELECT title and text with ORDER BY and LIMIT clause for selected terms
    mysql> SELECT <ID>, si_title, si_text, ... as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;
    Term                                      | InnoDB   | MyISAM    | Times faster
    family                                    | 0.61 sec | 41.65 sec | 68.3
    family film                               | 1.15 sec | 47.17 sec | 41.0
    Pizza restaurant orange county California | 1.03 sec | 48.2 sec  | 46.8
    President united states of America        | 2.49 sec | 44.61 sec | 17.9
    Table 5: SELECT ID with ORDER BY and LIMIT clause for selected terms
    mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;
    Term                                      | InnoDB   | MyISAM    | Times faster
    family                                    | 0.5 sec  | 5.05 sec  | 10.1
    family film                               | 0.95 sec | 25.39 sec | 26.7
    Pizza restaurant orange county California | 0.93 sec | 32.03 sec | 34.4
    President united states of America        | 2.5 sec  | 36.98 sec | 14.8
    Table 6: SELECT COUNT(*)
    mysql> SELECT count(*) FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) LIMIT 10;
    Term                                      | InnoDB   | MyISAM    | Times faster
    family                                    | 0.47 sec | 82 sec    | 174.5
    family film                               | 0.83 sec | 131 sec   | 157.8
    Pizza restaurant orange county California | 0.74 sec | 106 sec   | 143.2
    President united states of America        | 1.96 sec | 220 sec   | 112.2
    Again, Tables 3 to 6 all show InnoDB consistently outperforming MyISAM in these queries by a large margin. It is obvious that InnoDB has a great advantage over MyISAM in handling large data searches.
    Summary: These results demonstrate the performance we can achieve by coupling the MySQL optimizer and InnoDB Full-Text Search more tightly. I think there are still many cases where InnoDB's result information is not fully taken advantage of, which means we still have great room to improve, and we will continue to explore this area and get more dramatic results for InnoDB full-text searches. Jimmy Yang, September 29, 2012

    Read the article

  • Content Length and Transfer Encoding Chunked nginx, node-http-proxy

    - by rampr
    I have the following setup: node-http-proxy acts as a reverse proxy, forwarding all requests to nginx or socket.io as necessary. My problem is this: when I send an HTTP DELETE request from the browser, node-http-proxy adds a "Transfer-Encoding: chunked" header because the request from the browser has no Content-Length. The request has no Content-Length because it has no body. nginx doesn't like the Transfer-Encoding: chunked header and throws a 411 asking for a Content-Length. The problem goes away when I send dummy data as part of the DELETE request: then there is a Content-Length, node-http-proxy doesn't add the Transfer-Encoding: chunked header, and nginx is happy. I want to understand whether node-http-proxy is working as expected when it adds a Transfer-Encoding: chunked header to a request that has no Content-Length because it has no body.

    Read the article

  • Windows domain full hostnames cannot be resolved resulting in intranet not working

    - by OpethR
    The domain is foo.bar.local, the full hostname is bla.foo.bar.local, and the short hostname is bla. I installed winbind. Here is the relevant line from my smb.conf: name resolve order = lmhosts host wins bcast. Here is the hosts line from my nsswitch.conf: hosts: files mdns4_minimal [NOTFOUND=return] dns wins mdns4. When I try to ping the full hostname, I get "ping: unknown host". When I ping the short hostname it works and shows me: PING bla.foo.bar.local (10.11.20.135) 56(84) bytes of data. 64 bytes from bla.foo.bar.local (10.11.20.135): icmp_req=1 ttl=62 time=49.7 ms (notice that it manages to resolve the full hostname!). The only reason I need this is because I'm trying to reach intranet websites. When I type the short hostname "bla" into the Firefox address bar, it automatically changes it to the full hostname (which is good, right?), but then it says: Server not found. Firefox can't find the server at bla.foo.bar.local. What am I doing wrong? It's driving me nuts. And yes, in case you are wondering, it is a company intranet I'm trying to reach from Ubuntu. If I use my old Windows XP machine, everything works perfectly.

    Read the article

  • Ubuntu 12.04 Full Screen without Unity Hub

    - by Xatolos
    I couldn't find an answer for this, so I thought I would try here. My problem is that I have some games under Wine that should play in full-screen mode, but I can't really get them into full-screen mode: when they start up, the desktop's "HUD" ends up overlapping the game, which is really annoying and effectively kills the option to play it. This has also happened in a few of my Linux programs. Although I wrote that it's a problem with Unity, it really isn't limited to it: I've used full-screen mode on Unity and had the Unity sidebar and top menu overlap, I've used GNOME and had the same issue (the GNOME menu stays at the top), and Cinnamon had the same issue. Sometimes if I Alt-Tab out of the program and back in, it might remove the HUD, but that doesn't always work, and let's face it, a full-screen mode that is broken by the GUI isn't something that can really be ignored. Any suggestions or help would be highly welcome. My laptop: http://www.cnet.com/laptops/asus-g53jw-a1-15/4507-3121_7-34210244.html Edit: So far this seems to be an issue with Unity and GNOME being in 3D mode, and possibly an issue with my NVIDIA card and X server (I think that was it...). Switching to the Unity/GNOME 2D modes seems to help for the moment...

    Read the article

  • OBIEE 11.1.1 - OBIEE 11g Full Sample App on VMware Player 4

    - by user809526
    The Full Sample App is designed to run on VirtualBox. Let's describe how to run it on VMware Player 4.
    Open Virtualization Format Tool: http://communities.vmware.com/community/vmtn/server/vsphere/automationtools/ovf
    VMware Player Documentation: https://www.vmware.com/support/pubs/player_pubs.html
    Full Sample App Deployment Guide: sampleapp107-vbimage-deployguide-453583.pdf
    INSTALL
    Install VMware Player 4.0.0 as root. LINUX # sh VMware-Player-4.0.0-471780.x86_64.bundle
    (A new VM is not needed and can be deleted later, after the installation is completed: "I will install OS later" - blank hard disk; Guest: Linux, Red Hat Enterprise Linux 5 64-bit => rename to RHEL; target: e.g. /a/root/vmware/; Max disk size: 5 GB (will be deleted); Disk: Single file. A dummy RHEL.vmk, RHEL.vmdk is generated. Then "Delete VM from Disk" in VM Player.)
    Copy the Full Sample App files to the target /a/root/vmware/. WARNING: Select a target, e.g. /a/root/vmware/, with lots of free space, 95 GB.
    Check the checksums (md5sum). Please do it!
    ff85c7eacf7fb8c382e98da875e879e1  Sampleapp_v107_GA-disk1.vmdk
    973258cb3c7d64ab03ae853278cf2233  Sampleapp_v107_GA-disk2.vmdk
    e576be16e36d810479736bfb15d050f5  Sampleapp_v107_GA-disk3.vmdk
    3455df77279e53e07d5fee6712f1597d  Sampleapp_v107_GA-disk4.vmdk
    OVF FILE: Sampleapp_v107_GA.ovf
    CONVERSION
    $ cd /a/root/vmware/
    LINUX $ /usr/bin/ovftool -tt=ovf --compress=1 -dm=monolithicSparse Sampleapp_v107_GA.ovf .  [dot]
    Opening OVF source: Sampleapp_v107_GA.ovf / Warning: No manifest file / Opening OVF target: . / Writing OVF package: Sampleapp_v107_GA/Sampleapp_v107_GA.ovf / Disk Transfer Completed / Completed successfully
    WINDOWS CYGWIN $ /cygdrive/c/VMwarePlayer/OVFTool/ovftool.exe -tt=ovf --compress=1 -dm=monolithicSparse Sampleapp_v107_GA.ovf .  [dot]
    Opening OVF source: Sampleapp_v107_GA.ovf / Warning: No manifest file / Opening OVF target: . / Writing OVF package: Sampleapp_v107_GA\Sampleapp_v107_GA.ovf / Disk Transfer Completed / Completed successfully
    /a/root/vmware$ du -sk
    49095328    .   [50 GB already occupied]
    IMPORT - First start of VM Player 4: /usr/bin/vmplayer
    "Open a Virtual Machine"; browse to /a/root/vmware/Sampleapp_v107_GA/Sampleapp_v107_GA.ovf [the newly generated .ovf].
    "Import Virtual Machine" dialog - Name: Sampleapp_v107_GA; Location: /a/root/vmware/Sampleapp_v107_GA/storage [was /home/tdubois/vmware/Sampleapp_v107_GA]; "Import".
    "The import failed because /a/root/vmware/Sampleapp_v107_GA/Sampleapp_v107_GA.ovf did not pass OVF specification conformance or virtual hardware compliance checks. Click Retry to relax OVF specification..." - "Retry"; long import.
    /a/root/vmware/Sampleapp_v107_GA/storage/Sampleapp_v107_GA.vmx and new .vmdk files are created.
    /a/root/vmware$ du -sk
    95551384    .   [95 GB occupied]
    Full Sample App GUEST SETUP
    "Edit VM settings": min 3 GB RAM, 2+ processors, network bridged. For OBIEE + Essbase testing use 8 GB RAM hardware.
    At the first launch of the Full Sample App, leave OEL booting for several minutes undisturbed. A problem with the X display server may occur [/usr/bin/Xorg ; man Xorg]:
    "Failed to start the X server.... Would you like to view the X server output to diagnose the problem?" - "No" [tab key]
    "Would you like to try to configure the X server? Note that you will need the root password for this." - "Yes" [oracle]
    X Display Settings 800x600 saved in /etc/X11/xorg.conf; "Trying to restart the X server".
    Login as root/oracle in the guest OEL.
    In the guest OEL, Virtual Machine > Install VMware Tools...
    Extract the archive VMwareTools-8.8.0-471268.tar.gz (all files) into a writable local directory, e.g. /root. In a Terminal, run the Perl script: # cd /root/vmware-tools-distrib ; ./vmware-install.pl [keep all default answers]
    Set the keyboard layout: System > Preferences > Keyboard > Layouts
    Restart the X server, e.g. System > Log Out root..., then relogin.
    Modify the X resolution: System > Preferences > Screen Resolution
    Full Sample App OEL login: oracle/oracle ; root/oracle [default US keyboard layout]. The credentials are described in 'sampleapp107-vbimage-deployguide-453583.pdf'.
    The large files in /a/root/vmware/ and /a/root/vmware/Sampleapp_v107_GA/ may be removed.
    FAILURE REMARK: Adding the 4 original Sampleapp_v107_GA-disks[1234].vmdk to VM Player does NOT work, as described below.
    "Edit VM settings" "Remove" "Hard Disk"; "Edit VM settings" "Add" "Hard Disk" "Next" "Use an existing virtual disk" "Browse" "Finish" "Keep existing format" "Ok", for each of the 4 disks one by one.
    Start VM Player 4. "You do not have write access to a partition" - allow all. Sampleapp_v107 OEL Linux launches, but OEL stalls silently after 'Checking filesystems'.

    Read the article

  • Using XNA ContentPipeline to export a file in a machine without full XNA GS

    - by krolth
    My game uses the Content Pipeline to load the spriteSheet at runtime. The artist for the game sends me the modified spritesheet and I do a build in my machine and send him an updated project. So I'm looking for a way to generate the xnb files in his machine (this is the output of the content pipeline) without him having to install the full XNA Game studio. 1) I don't want my artist to install VS + Xna (I know there is a free version of VS but this won't scale once we add more people to the team). 2) I'm not interested in running this editor/tool in Xbox so a Windows only solution works. 3) I'm aware of MSBuild options but they require full XNA I researched Shawn's blog and found the option of using Msbuild Sample or a new option in XNA 4.0 that looked promising here but seems like it has the same restriction: Need to install full XNA GS because the ContentPipeline is not part of the XNA redist. So has anyone found a workaround for this?

    Read the article

  • New Java ME security app, Rapid Tracker, is now full version

    - by hinkmond
    Rapid Protect has updated its Java ME security app to be the full version now, instead of a dumbed-down version that ran on feature phones. Now, that's progress! See: Full Rapid Tracker on Java ME. Here's a quote: Rapid Protect, a leading company focused on mobile based safety, security and collaboration space announces major feature enhancements to its award winning "Rapid Tracker" mobile applications. In addition to many new features, it announced availability of Full Rapid Tracker application on J2ME non-smart feature phones. Hmmm... "on J2ME non-smart feature phones". I wonder if by "non-smart" they mean another word... Perhaps, "non-iDrone-Anphoid"? Hinkmond

    Read the article

  • Documenting applications - automation / semi-automation for screenshots?

    - by bguiz
    For me, one of the biggest bores of being a developer is writing user documentation. (I am referring to the material that gets exported into the PDF files that ship with the product, not comments in code.) The task of adding or updating bits of text in the existing documentation is OK. However, having to take screenshots of selected screens can be quite a tedious process. Is there a way to automate, or even semi-automate, the process of taking screenshots? The main requirement is the ability to crop images so that they contain only the window, including the window manager areas (such as the title bar). The secondary requirement is that any format is OK, as long as it can be exported to PDF. EDIT: Any more takers?

    Read the article

  • Shorten array length once an element is removed in Java.

    - by lupin
    Note: the following is my homework/assignment, so feel free not to answer if you prefer. I want to delete/remove an element from a basic String array (a Set); I'm not allowed to use Collections, etc. Now I have this: void remove(String newValue) { for ( int i = 0; i < setElements.length; i++) { if ( setElements[i] == newValue ) { setElements[i] = ""; } } } It does what I want in that it removes the element from the array, but it doesn't shorten the length. The following is the output; basically it removes the element at index 1. D:\javaprojects>java SetsDemo Enter string element to be added A You entered A Set size is: 5 Member elements on index: 0 A Member elements on index: 1 b Member elements on index: 2 hello Member elements on index: 3 world Member elements on index: 4 six Set size is: 5 Member elements on index: 0 A Member elements on index: 1 Member elements on index: 2 hello Member elements on index: 3 world Member elements on index: 4 six lupin
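
    A Java array's length is fixed once it is created, so clearing a slot can never shorten it; to actually shrink the set you have to allocate a new, smaller array and copy the surviving elements over (and compare strings with equals() rather than ==, which only compares references). Below is a minimal sketch of that idea; the class and method names are illustrative, not the assignment's required API.

      import java.util.Arrays;

      public class SetDemo {
          // Removes the first matching element and returns a new, shorter array.
          static String[] remove(String[] setElements, String value) {
              for (int i = 0; i < setElements.length; i++) {
                  if (setElements[i].equals(value)) {
                      String[] shorter = new String[setElements.length - 1];
                      System.arraycopy(setElements, 0, shorter, 0, i);
                      System.arraycopy(setElements, i + 1, shorter, i,
                                       setElements.length - i - 1);
                      return shorter;
                  }
              }
              return setElements;   // value not found: return the array unchanged
          }

          public static void main(String[] args) {
              String[] set = {"A", "b", "hello", "world", "six"};
              set = remove(set, "b");
              System.out.println(Arrays.toString(set));    // [A, hello, world, six]
              System.out.println("Set size is: " + set.length);   // 4
          }
      }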

    Read the article

  • Automatic website screenshots for Web app?

    - by amfeng
    Is there a way to take automatic screenshots of a web page (by specifying its URL) in a web app using PHP or Ruby on Rails, perhaps using a plugin or some external REST service? I've researched a lot, and nothing seems to fit except something like this (http://www.binarymoon.co.uk/2010/02/automated-take-screenshots-website-free/), but I doubt WordPress would just let me spam their servers for something not WordPress related. I'd like to use it for my own web application, so I'm not sure about the legal implications. How hard is this to implement myself? What does it entail? Thanks!

    Read the article

  • All possible combinations of length 8 in a 2d array

    - by CodeJunki
    Hi, I've been trying to solve a combinations problem. I have a 6x6 matrix and I'm trying to find all combinations of length 8 in it. I have to move from neighbor to neighbor from each (row, column) position, and I wrote a recursive program which generates the combinations, but the problem is that it also generates a lot of duplicates and hence is inefficient. I would like to know how I could avoid computing duplicates and save time. int a[6][6] = {{1,2,3,4,5,6}, {8,9,1,2,3,4}, {5,6,7,8,9,1}, {2,3,4,5,6,7}, {8,9,1,2,3,4}, {5,6,7,8,9,1}}; void genSeq(int row,int col,int length,int combi) { if(length==8) { printf("%d\n",combi); return; } combi = (combi * 10) + a[row][col]; if((row-1)>=0) genSeq(row-1,col,length+1,combi); if((col-1)>=0) genSeq(row,col-1,length+1,combi); if((row+1)<6) genSeq(row+1,col,length+1,combi); if((col+1)<6) genSeq(row,col+1,length+1,combi); if((row+1)<6&&(col+1)<6) genSeq(row+1,col+1,length+1,combi); if((row-1)>=0&&(col+1)<6) genSeq(row-1,col+1,length+1,combi); if((row+1)<6&&(col-1)>=0) genSeq(row+1,col-1,length+1,combi); if((row-1)>=0&&(col-1)>=0) genSeq(row-1,col-1,length+1,combi); } I was also thinking of writing a dynamic program, basically recursion with memoization. Is that a better choice? If yes, I'm not clear how to implement it in the recursion. Have I really hit a dead end with this approach? Thank you. Edit: example results: 12121212, 12121218, 12121219, 12121211, 12121213. The restrictions are that you have to move to a neighbor from any point, and you have to start from each position in the matrix, i.e. each (row, col). You can move one step at a time: right, left, up, down, and the four diagonal directions (check the if conditions). I.e. if you're at (0,0) you can move to (1,0), (1,1) or (0,1), i.e. three neighbors; if you're at (2,2) you can move to eight neighbors, and so on.
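
    Memoizing states here is awkward because the printed value depends on the whole path, not just the current cell, but the duplicate output itself is easy to suppress: with digits 1-9, every 8-step sequence maps to an integer below 100,000,000, so a ~12.5 MB bitmap can record which values have already been printed. The C sketch below is one way to do that; it deduplicates the output but does not prune the recursion itself, and the neighbor loop and bitmap are my own illustration, not the original poster's code.

      #include <stdio.h>

      #define N 6
      #define TARGET_LEN 8
      #define MAX_SEQ 100000000UL           /* digits are 1-9, so 8 of them < 10^8 */

      static int a[N][N] = {
          {1,2,3,4,5,6}, {8,9,1,2,3,4}, {5,6,7,8,9,1},
          {2,3,4,5,6,7}, {8,9,1,2,3,4}, {5,6,7,8,9,1},
      };

      static unsigned char seen[MAX_SEQ / 8];   /* bitmap of already-printed values */

      static void genSeq(int row, int col, int length, long combi)
      {
          if (length == TARGET_LEN) {
              if (!(seen[combi >> 3] & (1 << (combi & 7)))) {  /* print each value once */
                  seen[combi >> 3] |= (unsigned char)(1 << (combi & 7));
                  printf("%ld\n", combi);
              }
              return;
          }
          combi = combi * 10 + a[row][col];
          for (int dr = -1; dr <= 1; dr++)          /* visit all in-bounds neighbors */
              for (int dc = -1; dc <= 1; dc++) {
                  int r = row + dr, c = col + dc;
                  if ((dr != 0 || dc != 0) && r >= 0 && r < N && c >= 0 && c < N)
                      genSeq(r, c, length + 1, combi);
              }
      }

      int main(void)
      {
          for (int r = 0; r < N; r++)
              for (int c = 0; c < N; c++)
                  genSeq(r, c, 0, 0);       /* start a sequence at every cell */
          return 0;
      }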

    Read the article
