Search Results

Search found 1697 results on 68 pages for 'clone'.


  • Make a Drive Image Using an Ubuntu Live CD

    - by Trevor Bekolay
    Cloning a hard drive is useful, but what if you have to make several copies, or you just want to make a complete backup of a hard drive? Drive images let you put everything, and we mean everything, from your hard drive in one big file. With an Ubuntu Live CD, this is a simple process – the versatile tool dd can do this for us right out of the box. We’ve used dd to clone a hard drive before. Making a drive image is very similar, except instead of copying data from one hard drive to another, we copy from a hard drive to a file. Drive images are more flexible, as you can do what you please with the data once you’ve pulled it off the source drive. Your drive image is going to be a big file, depending on the size of your source drive – dd will copy every bit of it, even if there’s only one tiny file stored on the whole hard drive. So, to start, make sure you have a device connected to your computer that will be large enough to hold the drive image. Some ideas for places to store the drive image, and how to connect to them in an Ubuntu Live CD, can be found at this previous Live CD article. In this article, we’re going to make an image of a 1GB drive, and store it on another hard drive in the same PC. Note: always be cautious when using dd, as it’s very easy to completely wipe out a drive, as we will show later in this article. Creating a Drive Image Boot up into the Ubuntu Live CD environment. Since we’re going to store the drive image on a local hard drive, we first have to mount it. Click on Places and then the location that you want to store the image on – in our case, a 136GB internal drive. Open a terminal window (Applications > Accessories > Terminal) and navigate to the newly mounted drive. All mounted drives should be in /media, so we’ll use the command cd /media and then type the first few letters of our difficult-to-type drive, press tab to auto-complete the name, and switch to that directory. If you wish to place the drive image in a specific folder, then navigate to it now. We’ll just place our drive image in the root of our mounted drive. The next step is to determine the identifier for the drive you want to make an image of. In the terminal window, type in the command sudo fdisk -l Our 1GB drive is /dev/sda, so we make a note of that. Now we’ll use dd to make the image. The invocation is sudo dd if=/dev/sda of=./OldHD.img This means that we want to copy from the input file (“if”) /dev/sda (our source drive) to the output file (“of”) OldHD.img, which is located in the current working directory (that’s the “.” portion of the “of” string). It takes some time, but our image has been created… Let’s test to make sure it works. Drive Image Testing: Wiping the Drive Another interesting thing that dd can do is totally wipe out the data on a drive (a process we’ve covered before). The command for that is sudo dd if=/dev/urandom of=/dev/sda This takes some random data as input, and outputs it to our drive, /dev/sda. If we examine the drive now using sudo fdisk -l, we can see that the drive is, indeed, wiped. Drive Image Testing: Restoring the Drive Image We can restore our drive image with a call to dd that’s very similar to how we created the image. The only difference is that the image is now our input file, and the drive is now our output file. The exact invocation is sudo dd if=./OldHD.img of=/dev/sda It takes a while, but when it’s finished, we can confirm with sudo fdisk -l that our drive is back to the way it used to be!
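    For quick reference, here are the three dd invocations from the walkthrough collected into one hedged shell sketch. The device name /dev/sda and the image path are the article's own examples – substitute whatever sudo fdisk -l reports on your machine before running anything, since the wipe step is destructive.

      # List drives and note the identifier of the source disk (example: /dev/sda)
      sudo fdisk -l

      # 1. Image the whole source drive into a file on the mounted storage drive
      sudo dd if=/dev/sda of=./OldHD.img

      # 2. (Destructive test) overwrite the source drive with random data
      sudo dd if=/dev/urandom of=/dev/sda

      # 3. Restore the image back onto the drive, then verify the partition table returned
      sudo dd if=./OldHD.img of=/dev/sda
      sudo fdisk -l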
Conclusion There are lots of reasons to create a drive image, with backup being the most obvious. Fortunately, with dd, creating a drive image only takes one line in a terminal window – if you’ve got an Ubuntu Live CD handy!

    Read the article

  • BI&EPM in Focus June 2014

    - by Mike.Hallett(at)Oracle-BI&EPM
    Applications Webcast Centre – A Library of Discussion and Research for Best Practice: Achieving Reliable Planning, Budgeting and Forecasting Talent Analytics and Big Data – Is HR ready for the challenge Enterprise Data – The cost of non-quality Customers Josephine Niemiec from ADP talks about Oracle Hyperion Workforce Planning at Collaborate 2014 (link) Video Chris Nelms from Ameren talks about Oracle BI Spend and Procurement Analytics at Collaborate 2014 (link) Video Leggett & Platt Leverages Oracle Hyperion EPM and Demantra (link) Video Pella Corporation Accelerates Close Cycle by Cutting Time for Financial Consolidation from Three Days to Less Than One Day (link) Secretaría General de Administración de Justicia en España Enhances Citizen Services with Near-Real-Time Business Intelligence Gleaned from 500 Databases  (link) Bellco Credit Union Speeds Budget Development by 30%—Gains Insight into Specific Branch and Financial Product Profitability  (link)  Video QDQ media Speeds up Financial Reporting by 24x, Gains Business Agility, and Integrates Seamlessly into Corporate Accounting System  (link) Westfield Group Maximizes Shopping Mall Revenue, Shortens Year-End Financial Consolidation by 75%  (link)  IL&FS Transportation Networks Shortens Financial Consolidation and Reporting Cycle by Eight Days, Gains In-Depth Insight into Business Performance   (link) Angel Trains Optimizes Rail Operations for Purchasing, Sourcing, and Project Management to Meet Challenges of Evolving Rail Industry  (link) Enterprise Performance Management June 11, at Oracle Utrecht, NL: Morning session: Explore Planning and Budgeting in the Cloud (link) June 12, London: PureApps Presents: Best Practice Financial Consolidation and Reporting Workshop (link) July 3, Koln: Oracle Hyperion Business Analytics Roundtable (link) Blog: What's Your Tax Strategy? Automate the Operational Transfer Pricing Process (link) YouTube Video: Automate Tax Reporting with Oracle Hyperion Tax Provision (link) YouTube Video: Introducing Oracle Hyperion Planning’s Tablet Optimized Interface (link) OracleEPMWebcasts @ YouTube (link) Partner webcasts: Wednesday, 4 June, 5.00 GMT - Case Study:  Lessons Learned from Edgewater Ranzal's Internal Implementation of Oracle Planning & Budgeting Cloud Service (PBCS) - Learn more and register here! Thursday, 5 June, 4.00 GMT - Achieving Accountable Care Using Oracle Technology - Learn more and register here! Tuesday, 17 June, 4.00 GMT - Optimizing Performance for Oracle EPM Systems - Learn more and register here! 
Oracle University Blog: The Coolest Features Available with Oracle Hyperion 11.1.2.3 – Training from OU to help you to best use them (link) Support: Proactive Support: EPM Hyperion Planning 11.1.2.3.500 Using RMI Service [Blog] Proactive Support: Planning and Budgeting Cloud Service Videos (link) Planning and Budgeting Cloud Service (PBCS) 11.1.2.3.410 Patch Bundle [Doc ID 1670981.1] Hyperion Analytic Provider Services 11.1.2.2.106 Patch Set Update [Doc ID 1667350.1] Hyperion Essbase 11.1.2.2.106 Patch Set Update [Doc ID 1667346.1] Hyperion Essbase Administration Services 11.1.2.2.106 Patch Set Update [Doc ID 1667348.1] Hyperion Essbase Studio 11.1.2.2.106 Patch Set Update [Doc ID 1667329.1] Hyperion Smart View 11.1.2.5.210 Patch Set Update [Doc ID 1669427.1] Using HPCM, HSF or DRM Communities (link) Business Intelligence June 12, Birmingham, UK: Oracle Big Data at Work - Use Cases and Architecture (link) June 17, London: Oracle at Cloud & Big Data World Forums (link) June 17, Partner Webcast: Transform your Planning Capabilities with Peloton's CloudAccelerator for Oracle PBCS (link) June 19, London: Oracle at the Whitehall Media Big Data Analytics Conference and Exhibition (link) June 19, London: Partner Event - Agile BI Conference by Peak Indicators [link] June 25, Munich: Oracle Special Day auf der TDWI 2014 Konferenz (link) July 15, London: Oracle Endeca Information Discovery Workshop (link) July 16, London: BI Applications Workshop – Financial Analytics & Procurement Analytics (link) July 17, London: BI Applications Workshop – HR Analytics (link) Milan, Italy: L’Osservatorio Big Data Analytics & Business Intelligence with Politecnico di Milano (link) OBIA 11.1.1.8.1 - Now Available [Blog] What’s New in OBIA 11.1.1.8.1 [Blog] BI Blog: A closer look at Oracle BI Applications 11.1.1.8.1 release (link) Press Release: BI Applications Deliver Greater Insight into Talent and Procurement (link) Support Blog: OBIA 11.1.1.8.1 Upgrade Guide & Documentation (link) YouTube Video: Glenn Hoormann of Ludus talks to us about Oracle Business Intelligence and ERP at Collaborate 2014 (Link) YouTube Video: Performance Architects talks about key BI and Mobile trends, including Endeca at Collaborate 2014 (link) Big Data Blog: 3 Keys for Using Big Data Effectively for Enhanced Customer Experience (link) Big Data Lite Demo VM 3.0 Now Available on OTN BI Blog: Data Relationship Governance - Workflow in a Bottle (link) MDM Blog: Register for Product Data Management Weekly Cloudcasts (link) MDM Blog: Improve your Customer Experience with High Quality Information (link) MDM Blog: Big Data Challenges & Considerations (link) Oracle University: Oracle BI Applications 11g: Implementation using ODI (link) Proactive Support: Monthly Index [Blog] My Oracle Support: Partner Accreditation for Business Analytics Support [Blog] OBIEE 11g Test-to-Production (T2P) / Clone Procedures Guide [Blog]

    Read the article

  • Invalid Opcode 0000

    - by Mr47
    At random times (usually when watching a movie in XBMC), the computer locks up. I can still sometimes SSH in and get the 'dmesg' output before that locks up too. A hard reboot is usually required to get things going again. I have cut out the date/time/server columns for easier reading, please do ask if these seem relevant omissions... System: Ubuntu 11.04 (2.6.38-8-server) x64 X11 installed with IceWm (and XBMC) Core 2 Duo E8400 @ 3.00GHz 8 GB RAM Asus P5Q premium motherboard Primary harddrive: OCZ Vertex 2 60 GB (SSD) Other harddrives: various 750GB, 1TB, 1.5TB & 2TB (WD & samsung) Any important information I am not supplying is purely a sign of my incompetence in these matters, so please do ask and excuse me for my inabilities... invalid opcode: 0000 [#1] SMP last sysfs file: /sys/devices/system/cpu/cpu1/cache/index2/shared_cpu_map CPU 0 Modules linked in: parport_pc ppdev vesafb snd_hda_codec_analog tuner_simple tuner_types wm8775 tda9887 tda8290 tea5767 tuner cx25840 ir_lirc_codec lirc_dev snd_hda_intel snd_hda_codec snd_hwdep snd_pcm ir_sony_decoder snd_seq_midi snd_rawmidi snd_seq_midi_event snd_seq rc_rc6_mce ivtv ir_jvc_decoder cx2341x i2c_algo_bit v4l2_common mceusb videodev ir_rc6_decoder ir_rc5_decoder snd_timer ir_nec_decoder nvidia(P) btusb bluetooth rc_core v4l2_compat_ioctl32 tveeprom snd_seq_device pata_marvell psmouse shpchp serio_raw snd asus_atk0110 soundcore snd_page_alloc lp parport firewire_ohci firewire_core crc_itu_t r8169 sky2 ahci libahci Pid: 4597, comm: xbmc.bin Tainted: P 2.6.38-8-server #42-Ubuntu System manufacturer P5Q Premium/P5Q Premium RIP: 0010:[<ffffffff8119bc4a>] [<ffffffff8119bc4a>] do_mpage_readpage+0x9a/0x510 RSP: 0018:ffff88021f5a59d8 EFLAGS: 00210246 RAX: 0000000000000020 RBX: ffff88021f5a5ac8 RCX: 0000000000000000 RDX: 0000000000000001 RSI: 0000000015e36fe0 RDI: 0000000000000000 RBP: ffff88021f5a5a98 R08: ffff88021f5a5ac8 R09: ffff88021f5a5b38 R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000 R13: 000000000000b148 R14: 0000000000000001 R15: ffff8802067034b8 FS: 00007f3f34eb1700(0000) GS:ffff8800cfc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007feb8d515000 CR3: 000000021d744000 CR4: 00000000000406f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 Process xbmc.bin (pid: 4597, threadinfo ffff88021f5a4000, task ffff88021f63db80) Stack: ffff88021f5a5a28 ffffffff8116019d ffff88021f5a5b40 0000002000000003 ffff88021f5a5b38 ffff880206703370 ffffea0006d800f8 0000000000000000 ffffea0006d800f8 0000000c811270b5 ffff88021f5a5a68 ffffffff8110bdba Call Trace: [<ffffffff8116019d>] ? mem_cgroup_cache_charge+0xed/0x130 [<ffffffff8110bdba>] ? add_to_page_cache_locked+0xea/0x160 [<ffffffff8119c232>] mpage_readpages+0x102/0x150 [<ffffffff812063e0>] ? ext4_get_block+0x0/0x20 [<ffffffff812063e0>] ? ext4_get_block+0x0/0x20 [<ffffffff81149475>] ? alloc_pages_current+0xa5/0x110 [<ffffffff8120157d>] ext4_readpages+0x1d/0x20 [<ffffffff81116a9b>] __do_page_cache_readahead+0x14b/0x220 [<ffffffff81116ed1>] ra_submit+0x21/0x30 [<ffffffff81116ff5>] ondemand_readahead+0x115/0x230 [<ffffffff811171a0>] page_cache_async_readahead+0x90/0xc0 [<ffffffff8110b184>] ? file_read_actor+0xd4/0x170 [<ffffffff812de72e>] ? radix_tree_lookup_slot+0xe/0x10 [<ffffffff8110c521>] do_generic_file_read.clone.23+0x271/0x450 [<ffffffff8110d1ba>] generic_file_aio_read+0x1ca/0x240 [<ffffffff8100a82e>] ? 
__switch_to+0x20e/0x2f0 [<ffffffff81164c82>] do_sync_read+0xd2/0x110 [<ffffffff8108b61c>] ? hrtimer_try_to_cancel+0x4c/0xe0 [<ffffffff81279083>] ? security_file_permission+0x93/0xb0 [<ffffffff81164fa1>] ? rw_verify_area+0x61/0xf0 [<ffffffff81165463>] vfs_read+0xc3/0x180 [<ffffffff81165571>] sys_read+0x51/0x90 [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b Code: ff ff 48 c7 85 78 ff ff ff 00 00 00 00 49 d3 ee b9 0c 00 00 00 2b 4d 8c 48 8b b2 c8 00 00 00 ba 01 00 00 00 41 0f af c6 49 d3 e5 <0f> 36 4d 8c 4c 01 e8 d3 e2 4c 8d 44 16 ff 48 8b 53 20 49 d3 f8 RIP [<ffffffff8119bc4a>] do_mpage_readpage+0x9a/0x510 RSP <ffff88021f5a59d8> ---[ end trace ac6cd2f4692205a3 ]--- Please note that the error is ALWAYS occuring at do_mpage_readpage+0x9a/0x510 with the same numbers after it. I've tried to come up with the possible meaning of these, but couldn't get any further. I've also noticed that the top block from the call trace is always the following with the exact same numbers: [<ffffffff8116019d>] ? mem_cgroup_cache_charge+0xed/0x130 [<ffffffff8110bdba>] ? add_to_page_cache_locked+0xea/0x160 [<ffffffff8119c232>] mpage_readpages+0x102/0x150 [<ffffffff812063e0>] ? ext4_get_block+0x0/0x20 [<ffffffff812063e0>] ? ext4_get_block+0x0/0x20 Could this indicate a hard drive issue, a RAM issue or something else entirely?
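    Since the oops always lands on the same offset, one hedged way to turn it into something readable (assuming the matching Ubuntu debug-symbol kernel package is installed – the package name and vmlinux path below are assumptions, not taken from the question) is to resolve the address in gdb:

      # assumption: debug symbols for linux-image-2.6.38-8-server (dbgsym) are installed
      gdb /usr/lib/debug/boot/vmlinux-2.6.38-8-server
      (gdb) list *(do_mpage_readpage+0x9a)    # source line behind the faulting RIP
      (gdb) list *(mpage_readpages+0x102)     # same for the first frame of the call trace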

    Read the article

  • Converting Creole to HTML, PDF, DOCX, ..

    - by Marko Apfel
    Challenge We documented a project on GitHub with the wiki there. For most articles we used Creole as the markup language. Now we have to deliver a lot of the content to our client in a common format like PDF or DOCX. So we need an automated way to extract all relevant content, merge it together and convert the result to a new format. Problem One of the most popular toolsets to convert between several formats is Pandoc. But unfortunately Pandoc does not support Creole (see the converting matrix). Approach So we need an intermediate step: converting from Creole to a supported Pandoc format. Creole/c is a Creole to Html converter and does exactly what we need. After converting our Creole content to Html we can use Pandoc for all the subsequent tasks. Solution Getting the Creole stuff First of all we need the Creole content on our local machines. This is easy: because the GitHub wiki itself is a Git repository, we can clone it to our machine. In the working copy we now see all the files, and the suffix gives us the hint for the markup language. Converting and Merging Creole content to Html Because we would like all content from several Creole files in one HTML file, we have to convert and merge all the input files into one output file. Creole/c has an option (-b) to generate only the Html-stuff below a Html <Body>-tag. And this is the hook for us to start. We have to manually create the additional leading Html tags (<html>, <head>, ..), then merge all the needed Creole content into our output file, and finally add the closing tags. This can be done in a straightforward way with a little old-school DOS magic:
      REM === Generate the intro tags ===
      ECHO ^<html^> > %TMP%\output.html
      ECHO ^<head^> >> %TMP%\output.html
      ECHO ^<meta name="generator" content="creole/c"^> >> %TMP%\output.html
      ECHO ^</head^> >> %TMP%\output.html
      ECHO ^<body^> >> %TMP%\output.html
      REM === Mix in all interesting Creole stuff with creole/c ===
      .\Creole-C\bin\creole.exe -b .\..\datamodel+overview.creole >> %TMP%\output.html
      .\Creole-C\bin\creole.exe -b .\..\datamodel+domain+CvdCaptureMode.creole >> %TMP%\output.html
      .\Creole-C\bin\creole.exe -b .\..\datamodel+domain+CvdDamageReducingActivity.creole >> %TMP%\output.html
      .\Creole-C\bin\creole.exe -b .\..\datamodel+lookup+IncidentDamageCodes.creole >> %TMP%\output.html
      .\Creole-C\bin\creole.exe -b .\..\datamodel+table+Attachments.creole >> %TMP%\output.html
      .\Creole-C\bin\creole.exe -b .\..\datamodel+table+TrafficLights.creole >> %TMP%\output.html
      REM === Generate the outro tags ===
      ECHO ^</body^> >> %TMP%\output.html
      ECHO ^</html^> >> %TMP%\output.html
      REM === Convert the Html file to Docx with Pandoc ===
      .\Pandoc\bin\pandoc.exe -o .\Database-Schema.docx %TMP%\output.html
    Some explanation for this: The first ECHO call creates the file; the opening <html> tag is sent via > to a temporary working file. All following calls add content to the existing file via >>. The tag-characters < and > must be escaped. This is done by the caret sign (^). We use a file in the default temporary folder (%TMP%) to avoid writing into our current folders (better for continuous integration). Both toolsets (Creole/c and Pandoc) are copied to a versioned tools folder in the wiki. This is committable and causes no problem after pushing – GitHub does not do anything with it. This folder also contains the batch file (Export-Docx.bat) for all the steps. Pandoc recognizes the conversion by the suffixes of the file names. So it is enough to specify only the input and output files.

    Read the article

  • Searching for the Perfect Developer&rsquo;s Laptop.

    - by mbcrump
    I have been in the market for a new computer for several months. I set out with a budget of around $1200. I knew up front that the machine would be used for developing applications and maybe some light gaming. I kept switching between buying a laptop or a desktop but the laptop won because: With a Laptop, I can carry it everywhere and with a desktop I can’t. I searched for about 2 weeks and narrowed it down to a list of must-have’s : i7 Processor (I wasn’t going to settle for an i5 or AMD. I wanted a true Quad-core machine, not 2 dual-core fused together). 15.6” monitor SSD 128GB or Larger. – It’s almost 2011 and I don’t want an old standard HDD in this machine. 8GB of DDR3 Ram. – The more the better, right? 1GB Video Card (Prefer NVidia) – I might want to play games with this. HDMI Port – Almost a standard on new Machines. This would be used when I am on the road and want to stream Netflix to the HDTV in the Hotel room. Webcam Built-in – This would be to video chat with the wife and kids if I am on the road. 6-Cell Battery. – I’ve read that an i7 in a laptop really kills the battery. A 6-cell or 9-cell is even better. That is a pretty long list for a budget of around $1200. I searched around the internet and could not buy this machine prebuilt for under $1200. That was even with coupons and my company’s 10% Dell discount. The only way that I would get a machine like this was to buy a prebuilt and replace parts. I chose the  Lenovo Y560 on Newegg to start as my base. Below is a top-down picture of it.   Part 1: The Hardware The Specs for this machine: Color :  GrayOperating System : Windows 7 Home Premium 64-bitCPU Type : Intel Core i7-740QM(1.73GHz)Screen : 15.6" WXGAMemory Size : 4GB DDR3Hard Disk : 500GBOptical Drive : DVD±R/RWGraphics Card : ATI Mobility Radeon HD 5730Video Memory : 1GBCommunication : Gigabit LAN and WLANCard slot : 1 x Express Card/34Battery Life : Up to 3.5 hoursDimensions : 15.20" x 10.00" x 0.80" - 1.30"Weight : 5.95 lbs. This computer met most of the requirements above except that it didn’t come with an SSD or have 8GB of DDR3 Memory. So, I needed to start shopping except this time for an SSD. I asked around on twitter and other hardware forums and everyone pointed me to the Crucial C300 SSD. After checking prices of the drive, it was going to cost an extra $275 bucks and I was going from a spacious 500GB drive to 128GB. After watching some of the SSD videos on YouTube I started feeling better. Below is a pic of the Crucial C300 SSD. The second thing that I needed to upgrade was the RAM. It came with 4GB of DDR3 RAM, but it was slow. I decided to buy the Crucial 8GB (4GB x 2) Kit from Newegg. This RAM cost an extra $120 and had a CAS Latency of 7. In the end this machine delivered everything that I wanted and it cost around $1300. You are probably saying, well your budget was $1200. I have spare parts that I’m planning on selling on eBay or Anandtech.  =) If you are interested then shoot me an email and I will give you a great deal mbcrump[at]gmail[dot]com. 500GB Laptop 7200RPM HDD 4GB of DDR3 RAM (2GB x 2) faceVision HD 720p Camera – Unopened In the end my Windows Experience Rating of the SSD was 7.7 and the CPU 7.1. The max that you can get is a 7.9. Part 2: The Software I’m very lucky that I get a lot of software for free. When choosing a laptop, the OS really doesn’t matter because I would never keep the bloatware pre-installed or Windows 7 Home Premium on my main development machine. 
Matter of fact, as soon as I got the laptop, I immediately took out the old HDD without booting into it. After I got the SSD into the machine, I installed Windows 7 Ultimate 64-Bit. The BIOS was out of date, so I updated that to the latest version and started downloading drivers off of Lenovo’s site. I had to download the Wireless Networking Drivers to a USB-Key before I could get my machine on my wireless network. I also discovered that if the date on your computer is off then you cannot join the Windows 7 Homegroup until you fix it. I’m aware that most people like peeking into what programs other software developers use and I went ahead and listed my “essentials” of a fresh build. I am a big Silverlight guy, so naturally some of the software listed below is specific to Silverlight. You should also check out my master list of Tools and Utilities for the .NET Developer. See a killer app that I’m missing? Feel free to leave it in the comments below. My Software Essential List. CPU-Z Dropbox Everything Search Tool Expression Encoder Update Expression Studio 4 Ultimate Foxit Reader Google Chrome Infragistics NetAdvantage Ultimate Edition Keepass Microsoft Office Professional Plus 2010 Microsoft Security Essentials 2  Mindscape Silverlight Elements Notepad 2 (with shell extension) Precode Code Snippet Manager RealVNC Reflector ReSharper v5.1.1753.4 Silverlight 4 Toolkit Silverlight Spy Snagit 10 SyncFusion Reporting Controls for Silverlight Telerik Silverlight RadControls TweetDeck Virtual Clone Drive Visual Studio 2010 Feature Pack 2 Visual Studio 2010 Ultimate VS KB2403277 Update to get Feature Pack 2 to work. Windows 7 Ultimate 64-Bit Windows Live Essentials 2011 Windows Live Writer Backup. Windows Phone Development Tools That is pretty much it, I have a new laptop and am happy with the purchase. If you have any questions then feel free to leave a comment below.  Subscribe to my feed

    Read the article

  • SQLAuthority News – #TechEDIn – TechEd India 2012 – Things to Do and Explore for SQL Enthusiast

    - by pinaldave
    TechEd India 2012 is just 48 hours away and I have been receiving lots of requests regarding how SQL enthusiasts can maximize their time they’ll be spending at TechEd India 2012. Trust me – TechEd is the biggest Tech Event in India and it is much larger in magnitude than we can imagine. There are plenty of tracks there and lots of things to do. Honestly, we need clone ourselves multiple times to completely cover the event. However, I am going to talk about SQL enthusiasts only right now. In this post, I’ll share a few things they can do in this big event. But before I start talking about specific things, there is one thing which is a must – Keynote. There are amazing Keynotes planned every single day at TechEd India 2012. One should not miss them at all. Social Media I am a big believer of the social media. I am everywhere - Twitter, Facebook, LinkedIn and GPlus. I suggest you follow the tag #TechEdIn as well as contribute at the healthy conversation going on right now. You may want to follow a few of the SQL Server enthusiasts who are also attending events like TechEd India. This way, you will know where they are and you can contribute along with them. For a good start, you can follow all the speakers who are presenting at the event. I have linked all the speakers’ names with their respective Twitter accounts. Networking Do not stop meeting new people. Introduce yourself. Catch the speakers after their sessions. Meet other SQL experts and discuss SQL as well as life aside SQL. The best way to start the communication is to talk about something new. Here are a few lines I usually use when I have to break the ice: SQL Server 2012 is just released and I have installed it. How many SQL Server sessions are you going to attend? I am going to attend _________ I am a big fan of SQL Server. Sessions Agenda Day 1 T-SQL Rediscovered with SQL Server 2012 - Jacob Sebastian Catapult your data with SQL Server 2012 integration services - Praveen Srivatsa Processing Big Data with SQL Server 2012 and Hadoop  - Stephan Forte SQL Server Misconceptions and Resolution – A Practical Perspective – Pinal Dave and Vinod Kumar Securing with ContainedDB in SQL Server 2012  - Pranab Majumdar Agenda Day 2 Hand-on-Lab – Exploring Power View with SQL Server 2012 – Ravi S. Maniam Hand-on-Lab - SQL Server 2012 – AlwaysOn Availability Groups  - Amit Ganguli Agenda Day 3 Peeling SQL Server like an Onion: Internals Debunked  - Vinod Kumar Speed Up! – Parallel Processes and Unparalleled Performance  - Pinal Dave Keeping Your Database Available – ‘AlwaysOn’  - Balmukund Lakhani Lesser Known Facts of SQL Server Backup and Restore  - Amit Banerjee Top five reasons why you want SQL Server 2012 BI - Praveen Srivatsa Product Booth and Event Partners There will be a dedicated SQL Server booth at the event. I suggest you stop by there and do communication with SQL Server Experts. Additionally there will be booths of various event partners. Stop by their booth and see if they have a product which can help your career. I know that Pluralsight has recently released my course on their online learning site and if that interests you, you can talk about the subject with them. Bring Your Camera Make a list of the people you want to meet. Follow them on Twitter or send them an email and know their location. Introduce yourself, meet them and have your conversation. Do not forget to take a photo with them and later on, share the photo on social media. 
It would be nice to send an email to everyone with high-resolution images attached if you have their email addresses. After-hours parties After-hours parties are not always about eating and meeting friends; sometimes they are very informative. Last time I ended up meeting an SQL expert, and we ended up talking for long hours on various aspects of SQL Server. After 4 hours, we figured out that he lives in the same apartment complex as I do, and because we struck up an excellent friendship, he has since become a family friend. So, my advice is that you start to seek out who is meeting where in the evening and see if you can get invited to the parties. Make new friends but never lose mutual respect by doing something silly. Meet Me I will be at the event for three days straight. I will be around the SQL tracks. Please stop by and introduce yourself. I would like to meet you and talk to you. Meeting folks from the Community is very important as we all speak the same language at the end of the day – SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • Some notes on Reflector 7

    - by CliveT
    Both Bart and I have blogged about some of the changes that we (and other members of the team) have made to .NET Reflector for version 7, including the new tabbed browsing model, the inclusion of Jason Haley's PowerCommands add-in and some improvements to decompilation such as handling iterator blocks. The intention of this blog post is to cover all of the main new features in one place, and to describe the three new editions of .NET Reflector 7. If you'd simply like to try out the latest version of the beta for yourself you can do so here. Three new editions .NET Reflector 7 will come in three new editions: .NET Reflector .NET Reflector VS .NET Reflector VSPro The first edition is just the standalone Windows application. The latter two editions include the Windows application, but also add the power of Reflector into Visual Studio so that you can save time switching tools and quickly get to the bottom of a debugging issue that involves third-party code. Let's take a look at some of the new features in each edition. Tabbed browsing .NET Reflector now has a tabbed browsing model, in which the individual tabs have independent histories. You can open a new tab to view the selected object by using CTRL+CLICK. I've found this really useful when I'm investigating a particular piece of code but then want to focus on some other methods that I find along the way. For version 7, we wanted to implement the basic idea of tabs to see whether it is something that users will find helpful. If it is something that enhances productivity, we will add more tab-based features in a future version. PowerCommands add-in We have also included Jason Haley's PowerCommands add-in as part of version 7. This add-in provides a number of useful commands, including support for opening .xap files and extracting the constituent assemblies, and a query editor that allows C# queries to be written and executed against the Reflector object model . All of the PowerCommands features can be turned on from the options menu. We will be really interested to see what people are finding useful for further integration into the main tool in the future. My personal favourite part of the PowerCommands add-in is the query editor. You can set up as many of your own queries as you like, but we provide 25 to get you started. These do useful things like listing all extension methods in a given assembly, and displaying other lower-level information, such as the number of times that a given method uses the box IL instruction. These queries can be extracted and then executed from the 'Run Query' context menu within the assembly explorer. Moreover, the queries can be loaded, modified, and saved using the built-in editor, allowing very specific user customization and sharing of queries. The PowerCommands add-in contains many other useful utilities. For example, you can open an item using an external application, work with enumeration bit flags, or generate assembly binding redirect files. You can see Bart's earlier post for a more complete list. .NET Reflector VS .NET Reflector VS adds a brand new Reflector object browser into Visual Studio to save you time opening .NET Reflector separately and browsing for an object. A 'Decompile and Explore' option is also added to the context menu of references in the Solution Explorer, so you don't need to leave Visual Studio to look through decompiled code. We've also added some simple navigation features to allow you to move through the decompiled code as quickly and easily as you can in .NET Reflector. 
When this is selected, the add-in decompiles the given assembly, Once the decompilation has finished, a clone of the Reflector assembly explorer can be used inside Visual Studio. When Reflector generates the source code, it records the location information. You can therefore navigate from the source file to other decompiled source using the 'Go To Definition' context menu item. This then takes you to the definition in another decompiled assembly. .NET Reflector VSPro .NET Reflector VSPro builds on the features in .NET Reflector VS to add the ability to debug any source code you decompile. When you decompile with .NET Reflector VSPro, a matching .pdb is generated, so you can use Visual Studio to debug the source code as if it were part of the project. You can now use all the standard debugging techniques that you are used to in the Visual Studio debugger, and step through decompiled code as if it were your own. Again, you can select assemblies for decompilation. They are then decompiled. And then you can debug as if they were one of your own source code files. The future of .NET Reflector As I have mentioned throughout this post, most of the new features in version 7 are exploratory steps and we will be watching feedback closely. Although we don't want to speculate now about any other new features or bugs that will or won't be fixed in the next few versions of .NET Reflector, Bart has mentioned in a previous post that there are lots of improvements we intend to make. We plan to do this with great care and without taking anything away from the simplicity of the core product. User experience is something that we pride ourselves on at Red Gate, and it is clear that Reflector is still a long way off our usual standards. We plan for the next few versions of Reflector to be worked on by some of our top usability specialists who have been involved with our other market-leading products such as the ANTS Profilers and SQL Compare. I re-iterate the need for the really great simple mode in .NET Reflector to remain intact regardless of any other improvements we are planning to make. I really hope that you enjoy using some of the new features in version 7 and that Reflector continues to be your favourite .NET development tool for a long time to come.

    Read the article

  • obiee 10g teradata Solaris deployment

    - by user554629
    I have 3-4 years worth of notes on proper Teradata deployment across multiple operating systems.   The topic that is too large to cover succinctly in a blog entry.   I'm trying something new:  document a specific situation, consolidate the facts, document diagnostic procedures and then clone the structure to cover other obiee deployments (11g and other operating systems). Until the icon below is removed, this blog entry may be revised frequently.  No construction between June 6th through June 25th. Getting started obiee 10g certification:  pg 24-25 Teradata V2R5.1.x - V2R6.2, Client 13.10, certified 10.1.3.4.1obiee 10g documentation: Deployment Guide, Server Administration, Install/Config Guideobiee overview: teradata connectivity downloads: ( requires registration )solaris odbc drivers: sparc 13.10:  Choose 13.10.00.04  ( ReadMe ) sparc 14.00: probably would work, but not certified by Oracle on 10g I assume you have obiee 10.1.3.4.1 installed; 10.1.3.4.2 would be a better choice. Teradata odbc install requires root for Solaris pkgadd Only 1 version of Teradata odbc can be installed.symbolic links to the current version are created in /usr/lib at install obiee implementation background database access has two types of implementation:  native and odbcnative drivers use DB vendor client interfaces for accessodbc drivers are provided by the DB vendor for DB accessTeradata is an odbc interface Database. odbc drivers require an ODBC Driver Managerobiee uses Merant Data Direct driver manager obiee servers communicate with one another using odbc.The internal odbc driver is implemented by the obiee team and requires Merant Driver Manager. Teradata supplies a Driver Manager, which is built by Merant, but should not be used in obiee. The nqsserver shared library deployment looks like this  OBIEE Server<->DataDirect Manager<->Teradata Driver<->Teradata Database nqsserver startup $ cd $BI/setup$ . ./sa-init64.sh$ run-sa.sh autorestart64 The following files are referenced from setup:  .variant.sh  user.sh  NQSConfig.INI  DBFeatures.INI  $ODBCINI ( odbc.ini )  sqlnet.ora How does nqsserver connect to Teradata? A teradata DSN is created in the RPD. ( TD71 )setup/odbc.ini contains: [ODBC Data Sources] TD71=tdata.so[TD71]Driver=/opt/tdodbc/odbc/drivers/tdata.soDescription=Teradata V7.1.0DBCName=###.##.##.### LastUser=Username=northwindPassword=northwindDatabase=DefaultDatabase=northwind setup/user.sh contains LIBPATH\=/opt/tdicu/lib_64\:/usr/odbc/lib\:/usr/odbc/drivers\:/usr/lpp/tdodbc/odbc/drivers\:$LIBPATHexport LIBPATH   setup/.variant.sh contains if [ "$ANA_SERVER_64" = "1" ]; then  ANA_BIN_DIR=${SAROOTDIR}/server/Bin64  ANA_WEB_DIR=${SAROOTDIR}/web/bin64  ANA_ODBC_DIR=${SAROOTDIR}/odbc/lib64         setup/sa-run.sh  contains . ${ANA_INSTALL_DIR}/setup/.variant.sh. ${ANA_INSTALL_DIR}/setup/user.sh logfile="${SAROOTDIR}/server/Log/nqsserver.out.log"${ANA_BIN_DIR}/nqsserver -quiet >> ${logfile} 2>&1 &   nqsserver is running: nqsserver produces $BI/server/nqsserver.logAt startup, the native database drivers connect and record DB versions.tdata.so is not loaded until a Teradata DB connection is attempted.    Teradata odbc client installation Accept all the defaults for pkgadd.   Install in /opt. $ mkdir odbc$ cd odbc$ gzip -dc ../tdodbc__solaris_sparc.13.10.00.04.tar.gz | tar -xf - $ sudo su# pkgadd -d . TeraGSS# pkgadd -d . tdicu1310# pkgadd -d . tdodbc1310   Directory Notes: /opt/teradata/client/13.10/odbc_64/lib/tdata.soThe 64-bit obiee library loaded by nqsserver. 
/opt/teradata/client/13.10/odbc_64/lib is not needed in LD_LIBRARY_PATH /opt/teradata/client/13.10/tdicu/lib64is needed in LD_LIBRARY_PATH /usr/odbc should not be referenced;  it is a link to 32-bit libraries LD_LIBRARY_PATH_64 should not be used.     Useful bash functions and aliases export SAROOTDIR=/export/home/dw_adm/OracleBIexport TERA_HOME=/opt/teradata/client/13.10 export ORACLE_HOME=/export/home/oracle/product/10.2.0/clientexport ODBCINI=$SAROOTDIR/setup/odbc.iniexport TD_ICU_DATA=$TERA_HOME/tdicu/lib64alias cds="alias | grep '^alias cd' | sed 's/^alias //' | sort"alias cdtd="cd $TERA_HOME; ls" alias cdtdodbc="cd $TERA_HOME/odbc_64; ls -l"alias cdtdicu="cd $TERA_HOME/tdicu/lib64; ls -l"alias cdbi="cd $SAROOTDIR; ls"alias cdbiodbc="cd $SAROOTDIR/odbc; ls -l"alias cdsetup="cd $SAROOTDIR/setup; ls -ltr"alias cdsvr="cd $SAROOTDIR/server; ls"alias cdrep="cd $SAROOTDIR/server/Repository; ls -ltr"alias cdsvrcfg="cd $SAROOTDIR/server/Config; ls -ltr"alias cdsvrlog="cd $SAROOTDIR/server/Log; ls -ltr"alias cdweb="cd $SAROOTDIR/web; ls"alias cdwebconfig="cd $SAROOTDIR/web/config; ls -ltr"alias cdoci="cd $ORACLE_HOME; ls"pkgfiles() { pkgchk -l $1 | awk  '/^Pathname/ {print $2}'; }pkgfind()  { pkginfo | egrep -i $1 ; } Examples: $ pkgfind td$ pkgfiles tdodbc1310 | grep 64$ cds$ cdtdodbc$ cdsetup$ cdsvrlog$ cdweblog
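    As a quick post-install sanity check, a hedged sketch along these lines can confirm that the 64-bit driver resolves its shared-library dependencies and that the DSN defined in setup/odbc.ini is the one nqsserver will read (paths follow the layout above; adjust for your install):

      # any unresolved dependency of the 64-bit Teradata ODBC driver shows up as "not found"
      ldd /opt/teradata/client/13.10/odbc_64/lib/tdata.so | grep "not found"

      # confirm which odbc.ini the server environment points at, and that the TD71 DSN is defined in it
      echo $ODBCINI
      grep "TD71" $ODBCINI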

    Read the article

  • How to handle updated configuration when it's already been cloned for editing

    - by alexrussell
    Really sorry about the title that probably doesn't make much sense. Hopefully I can explain myself better here as it's something that's kinda bugged me for ages, and is now becoming a pressing concern as I write a bit of software with configuration. Most software comes with default configuration options stored in the app itself, and then there's a configuration file (let's say) that a user can edit. Once created/edited for the first time, subsequent updates to the application cannot (easily) modify this configuration file for fear of clobbering the user's own changes to the default configuration. So my question is, if my application adds a new configurable parameter, what's the best way to aid discoverability of the setting and allow the user (developer) to override it as nicely as possible given the following constraints: I actually don't have a canonical default config in the application per se, it's more of a 'cascading filesystem'-like affair - the config template is stored in default/config.json and when the user wishes to edit the configuration, it's copied to user/config.json. If a user config is found it is used - there is no automatic overriding of a subset of keys, the whole new file is used and that's that. If there's no user config the default config is used. When a user wishes to edit the config they run a command to 'generate' it for them (which simply copies the config.json file from the default to the user directory). There is no UI for the configuration options as it's not appropriate to the userbase (think of my software as a library or something, the users are developers, the config is done in the user/config.json file). Due to my software being library-like there's no simple way to, on updating of the software, run some tasks automatically (so any ideas along the lines of 'look at the current config, compare it to the template config, add missing keys' aren't appropriate). The only solution I can think of right now is to say "there's a new config setting X" in release notes, but this doesn't seem ideal to me. If you want any more information let me know. The above specifics are not actually 100% true to my situation, but they represent the problem equally well with lower complexity. If you do want specifics, however, I can explain the exact setup. Further clarification of the type of configuration I mean: think of the Atom code editor. There appears to be a default 'template' config file somewhere, but as soon as a configuration option is edited ~/.atom/config.cson is generated and the setting goes in there. From now on, if Atom is updated and gets a new configuration key, this file cannot be overwritten by Atom without a lot of effort to ensure that the addition/modification of the key does not clobber. In Atom's case, because there is a GUI for editing settings, they can get away with just adding the new setting to the settings UI to aid 'discoverability'. I don't have that luxury. Clarification of my constraints and what I'm actually looking for: The software I'm writing is actually a package for a larger system. This larger system is what provides the configuration, and the way it works is kinda fixed - I just do a config('some.key') kinda call and it knows to look to see if the user has a config clone and if so use it, otherwise use the default config which is part of my package.
Now, while I could make my application edit the user's configuration files (there is a convention about where they're stored), it's generally not done, so I'd like to live with the constraints of the system I'm using if possible. And it's not just about discoverability either, one large concern is that the addition of a configuration key won't actually work as soon as the user has their own copy of the original template. Adding the key to the template won't make a difference as that file is never read. As such, I think this is actually quite a big flaw in the design of the configuration cascading system and thus needs to be taken up with my upstream. So, thinking about it, based on my constraints, I don't think there's going to be a good solution save for either editing the user's configuration or using a new config file every time there are updates to the default configuration. Even the release notes idea from above isn't doable as, if the user does not follow the advice, suddenly I have a config key with no value (user-defined or default). So the new question is this: what is the general way to solve the problem of having a default configuration in template config files and allowing a user to make user-specific version of these in order to override the defaults? A per-key cascade (rather than per-file cascade) where the user only specifies their overrides? In this case, what happens if a configuration value is an array - do we replace or append to the default (or, more realistically, how does the user specify whether they wish to replace or append to)? It seems like configuration is kinda hard, so how is it solved in the wild?
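    To make the per-key cascade idea from the closing question concrete, here is a hedged JavaScript sketch (the deepMerge helper and the config.json paths are hypothetical, not part of the system described): the user file holds only overrides, the template fills every other key, and arrays replace the default wholesale rather than appending, which keeps the behaviour predictable.

      // Hypothetical per-key cascade: user/config.json stores overrides only.
      function deepMerge(defaults, overrides) {
        var out = Object.assign({}, defaults);
        Object.keys(overrides).forEach(function (key) {
          var d = defaults[key];
          var o = overrides[key];
          // Plain objects merge key by key; arrays and primitives replace the default outright.
          if (d && o && typeof d === 'object' && typeof o === 'object' &&
              !Array.isArray(d) && !Array.isArray(o)) {
            out[key] = deepMerge(d, o);
          } else {
            out[key] = o;
          }
        });
        return out;
      }

      var defaults = require('./default/config.json'); // template shipped with the package
      var user = require('./user/config.json');        // partial file: overrides only
      var config = deepMerge(defaults, user);           // new template keys appear automatically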

    Read the article

  • Private Cloud: Putting some method behind the madness

    - by Sudip Datta
    Finally, I decided to join the blogging community. And what could be a better time to start than the week after OpenWorld 2012. 50K+ attendees, demonstrations, speaker sessions and a whole lot of buzz on Oracle Cloud..It was raining clouds in this year's Openworld. I am not here to write about Oracle's cloud strategy in general, but on Enterprise Manager's cloud management capabilities. This year's Openworld was the first after we announced the 12c Cloud Control and we were happy to share the stage with quite a few early adopters. Stay tuned for videos from our customers and partners, I will post them as they get published. I met a number of platform administrators in Oracle-DBAs, Middleware Admins, SOA Admins...The cloud has affected them all, at least to the point where it beckoned more than just curiosity..Most IT infrastructure are already heavily virtualized (on VMWare and on others including Oracle VM), and some would claim they are already on “cloud” (at least their Sysadmins told them so). But none of them were confident of the benefits because their pain points continued to grow.. Isn't cloud supposed to ease those? Instead, they were chasing hundreds of databases running on hundreds of VMs, often with as much certainty propounded by Heisenberg. What happened to the age-old IT discipline around administration, compliance, configuration management? VMs are great for what they are. I personally think they have opened the doors to new approaches in which an application stack gets provisioned and updated. In fact, Enterprise Manager 12c is possibly the only tool out there that can provision full-fledged application as VM Assemblies. In this year's Openworld, customers talked on how they provisioned RAC and Siebel assemblies, which as the techies out there know, are not trivial (hearing provisioning time for Siebel down from weeks to hours was gratifying indeed). However, I do have an issue with a "one-size fits all" approach to cloud. In a week's span, I met several personas: Project owners requiring an EC2 like VM instance for their projects Admins needing the same for Sparc-Solaris. DBAs requiring dedicated databases for new projects APEX Developers needing just a ready-to-consume schema as a service Java Developers looking for a runtime platform QA engineers needing a fast clone of their production environment If you drill down further, you will end up peeling more layers of the details. For example, the requirements for Load testing and Functional testing are very different. For Load testing the test environment should ideally be the same as the production. You shouldn't run production on Exadata and load test on a VM; they will just not be good representations of one another. For Functional testing it does not possibly matter. DBAs seem to be at the worst affected of the lot. It seems they have been asked to choose between agile provisioning and  faster runtime performance. And in some cases, it is really a Hobson's choice, because their infrastructure provider made no distinction between the OLTP application and the Virtual desktop! Sad indeed. When one looks at the portfolio of services that we already offer (vanilla IaaS, VM Assembly based PaaS, DBaaS) or have announced (Java PaaS, Instant Cloning, Schema-aaS), one can possibly think that we are trying to be the "renaissance man" ! Well I would have possibly digested that had it not been for the various personas that I described above. 
Getting the use cases right is very important for an application such as cloud management. We iterate over these again and again and re-validate them in CABs (Customer Advisory Boards). We consider all the major aspects of tenancy: service placement, resource isolation (can a tenant execute an expensive SQL and run away with all the resources), quota and security. We, in Engineering, keep reminding ourselves that we are dealing with enterprise clouds. We owe it to our customer base! In the coming posts, I will drill down more into each of the services. In the meantime, here are some collateral and demos for starters with EM 12c. http://www.oracle.com/technetwork/oem/cloud-mgmt/index.html Sudip Datta The views expressed here are my own and do not necessarily reflect the views of Oracle.

    Read the article

  • jquery ui draggable elements not 'draggable' outside of scrolling div

    - by Stu
    hello all, i am super stumped. i have many elements (floating href tags) in a div with a set height/width, with scroll set to "overflow: auto" in the css. this is the structure of the divs: <div id="tagFun_div_main"> <div id="tf_div_tagsReturn"> <!-- all the draggable elements go in here, the parent div scolls --> </div> <div id=" tf_div_tagsDrop"> <div id="tf_dropBox"></div> </div></div> the parent div's, 'tf_div_tagsReturn' and 'tf_div_tagsDrop' will ultimately float next to each other. here is the jquery which is run after all of the 'draggable' elements have been created with class name 'tag_cell', : $(function() { $(".tag_cell").draggable({ revert: 'invalid', scroll: false, containment: '#tagFun_div_main' }); $("#tf_dropBox").droppable({ accept: '.tag_cell', hoverClass: 'tf_dropBox_hover', activeClass: 'tf_dropBox_active', drop: function(event, ui) { GLOBAL_ary_tf_tags.push(ui.draggable.html()); tagFun_reload(); } }); }); as i stated above, the draggable elements are draggable within div 'tf_div_tagsReturn', but they do not visually drag outside of that parent div. worthy to note, if i am dragging one of the draggable elements, and move the mouse over the droppable div, with id 'tf_dropBox', then the hoverclass is fired, i just can't see the draggable element any more. thank you very much for any advice on helping me find a solution. this is my first run at using jquery, so hopefully i am just missing something super obvious. i've been reading the documentation and searching forums thus far to no prevail :( thank you for your time. UPDATE: many thanks to Jabes88 for providing the solution which allowed me to achieve the functionality i was looking for, here is what my jquery ended up looking like, feel free to critique it, as i am new to jquery. $(function() { $(".tag_cell").draggable({ revert: 'invalid', scroll: false, containment: '#tagFun_div_main', helper: 'clone', start : function() { this.style.display="none"; }, stop: function() { this.style.display=""; } }); $(".tf_dropBox").droppable({ accept: '.tag_cell', hoverClass: 'tf_dropBox_hover', activeClass: 'tf_dropBox_active', drop: function(event, ui) { GLOBAL_ary_tf_tags.push(ui.draggable.html()); tagFun_reload(); } }); });
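    A note on the accepted workaround above: the original element cannot escape the overflow: auto container, so the fix drags a clone and hides the original for the duration of the drag. If your jQuery UI version also supports the appendTo option for draggables (an assumption – check your version's API), a hedged variation is to attach the helper to the body so it is never clipped by the scrolling div:

      $(".tag_cell").draggable({
          revert: 'invalid',
          scroll: false,
          containment: '#tagFun_div_main',
          helper: 'clone',    // drag a copy so the original stays inside the scrolling list
          appendTo: 'body',   // assumption: appendTo is available in your jQuery UI build
          zIndex: 1000        // keep the helper above the drop target while dragging
      });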

    Read the article

  • why jQuery.parseJSON is not a function?

    - by Pandiya Chendur
    I use the following jquery statements and i am getting the error, jQuery.parseJSON is not a function my function is, function Iteratejsondata() {var HfJsonValue = { "Table": [{ "Emp_Id": "3", "Identity_No": "", "Emp_Name": "Jerome", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Supervisior", "Desig_Description": "Supervisior of the Construction", "SalaryBasis": "Monthly", "FixedSalary": "25000.00" }, { "Emp_Id": "4", "Identity_No": "", "Emp_Name": "Mohan", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Acc ", "Desig_Description": "Accountant", "SalaryBasis": "Monthly", "FixedSalary": "200.00" }, { "Emp_Id": "5", "Identity_No": "", "Emp_Name": "Murugan", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason", "Desig_Description": "Mason", "SalaryBasis": "Weekly", "FixedSalary": "150.00" }, { "Emp_Id": "6", "Identity_No": "", "Emp_Name": "Ram", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason", "Desig_Description": "Mason", "SalaryBasis": "Weekly", "FixedSalary": "120.00" }, { "Emp_Id": "7", "Identity_No": "", "Emp_Name": "Raja", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason", "Desig_Description": "Mason", "SalaryBasis": "Weekly", "FixedSalary": "135.00" }, { "Emp_Id": "8", "Identity_No": "", "Emp_Name": "Raja kumar", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason Helper", "Desig_Description": "Mason Helper", "SalaryBasis": "Weekly", "FixedSalary": "105.00" }, { "Emp_Id": "9", "Identity_No": "", "Emp_Name": "Lakshmi", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Mason Helper", "Desig_Description": "Mason Helper", "SalaryBasis": "Weekly", "FixedSalary": "100.00" }, { "Emp_Id": "10", "Identity_No": "", "Emp_Name": "Palani", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Carpenter", "Desig_Description": "Carpenter", "SalaryBasis": "Weekly", "FixedSalary": "200.00" }, { "Emp_Id": "11", "Identity_No": "", "Emp_Name": "Annamalai", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Carpenter", "Desig_Description": "Carpenter", "SalaryBasis": "Weekly", "FixedSalary": "220.00" }, { "Emp_Id": "12", "Identity_No": "", "Emp_Name": "David", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Steel Fixer", "Desig_Description": "Steel Fixer", "SalaryBasis": "Weekly", "FixedSalary": "220.00" }, { "Emp_Id": "13", "Identity_No": "", "Emp_Name": "Chandru", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Steel Fixer", "Desig_Description": "Steel Fixer", "SalaryBasis": "Weekly", "FixedSalary": "220.00" }, { "Emp_Id": "14", "Identity_No": "", "Emp_Name": "Mani", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Steel Helper", "Desig_Description": "Steel Helper", "SalaryBasis": "Weekly", "FixedSalary": "175.00" }, { "Emp_Id": "15", "Identity_No": "", "Emp_Name": "Karthik", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Wood Fixer", "Desig_Description": "Wood Fixer", "SalaryBasis": "Weekly", "FixedSalary": "195.00" }, { "Emp_Id": "16", "Identity_No": "", "Emp_Name": "Bala", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Wood Fixer", "Desig_Description": "Wood Fixer", "SalaryBasis": "Weekly", "FixedSalary": "185.00" }, { "Emp_Id": "17", "Identity_No": "", "Emp_Name": "Tamil arasi", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Wood Helper", "Desig_Description": "Wood Helper", "SalaryBasis": "Weekly", "FixedSalary": "185.00" }, { "Emp_Id": "18", "Identity_No": "", "Emp_Name": "Perumal", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": 
"Cook", "Desig_Description": "Cook", "SalaryBasis": "Weekly", "FixedSalary": "105.00" }, { "Emp_Id": "19", "Identity_No": "", "Emp_Name": "Andiappan", "Address": "Madurai", "Date_Of_Birth": "", "Desig_Name": "Watchman", "Desig_Description": "Watchman", "SalaryBasis": "Weekly", "FixedSalary": "150.00"}] }; //var jsonObj = eval('(' + HfJsonValue + ')'); var jsonObj = jQuery.parseJSON(HfJsonValue); and my page looks like this <div id="Pagination" class="page-numbers"></div> <br style="clear:both;" /> <div id="Searchresult"></div> <div id="hiddenresult" style="display:none;"> </div> <script type="text/javascript"> var pagination_options = { num_edge_entries: 2, num_display_entries: 8, callback: pageselectCallback, items_per_page: 3 } function pageselectCallback(page_index, jq) { var items_per_page = pagination_options.items_per_page; var offset = page_index * items_per_page; var new_content = $('#hiddenresult div.resultsdiv').slice(offset, offset + items_per_page).clone(); $('#Searchresult').empty().append(new_content); return false; } function initPagination() { var num_entries = $('#hiddenresult div.resultsdiv').length; // Create pagination element $("#Pagination").pagination(num_entries, pagination_options); } $(document).ready(function() { Iteratejsondata(); initPagination(); }); </script> I ve inspected through firebug and saw all jquery files have been downloaded but why this is hapenning? Any suggestion....

    Read the article

  • Mercurial to Mercurial to Subversion Workflow Problem

    - by Dalroth
    We're migrating from Subversion to Mercurial. To facilitate the migration, we're creating an intermediate Mercurial repository that is a clone of our Subversion repository. All developers will begin switching over to the Mercurial repository, and we'll periodically push changes from the intermediate Mercurial repository to the existing Subversion repository. After a period of time, we'll simply obsolete the Subversion repository and the intermediate Mercurial repository will become the new system of record. Dev 1 Local --+--> Mercurial --+--> Subversion Dev 2 Local --+ + Dev 3 Local --+ + Dev 4 -------------------------+ I've been testing this out, but I keep running into a problem when I push changes from my local repository, to the intermediate Mercurial repository, and then up into our Subversion repository. On my local machine, I have a changeset that is committed and ready to be pushed to our intermediate Mercurial repository. Here you can see it is revision #2263 with hash 625... I push only this changeset to the remote repository. So far, everything looks good. The changeset has been pushed. I now switch over to the remote repository and update the working directory: hg update 1 files updated, 0 files merged, 0 files removed, 0 files unresolved Next, I push the change up to Subversion: hg push pushing to svn://... searching for changes [r3834] bmurphy: database namespace pulled 1 revisions saving bundle to /srv/hg/repository/.hg/strip-backup/62539f8df3b2-temp adding branch adding changesets adding manifests adding file changes added 1 changesets with 1 changes to 1 files rebase completed It works great. At this point, the change is in the Subversion repository and I return my attention to my local client. I pull changes to my local machine. Huh? I've now got two changesets. My original changeset appears as a local branch now. The other changeset has a new revision number 2264, and a new hash 10c1... Anyway, I update my local repo to the new revision. I'm now switched over. So, I finally click the "determine and mark outgoing changesets" and, as you can see, Mercurial still wants to push out my previous changesets even though they've already been pushed. Clearly, I'm doing something wrong. I also can't merge the two revisions. If I merge the two revisions on my local machine, I end up with a "merge" commit. When I push that merge commit out to the intermediate Mercurial repository, I can no longer push changes out to our Subversion repository. I end up with the following problem: hg update 0 files updated, 0 files merged, 0 files removed, 0 files unresolved hg push pushing to svn://... searching for changes abort: Sorry, can't find svn parent of a merge revision. and I have to roll back the merge to get back to a working state. What am I missing?
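
    A hedged suggestion rather than a verified fix: with an hgsubversion-style bridge, the push to Subversion rewrites the changeset (hence the new hash 10c1... and revision 2264), so the original 2263 becomes an obsolete duplicate that should be discarded, not merged. A rough sketch of that cleanup on the local clone, assuming the strip command (mq extension) is available; the revision numbers are the ones from the question:

        hg pull            # brings in the rewritten changeset created by the svn push
        hg update 2264     # keep working on top of the converted revision
        hg strip 2263      # drop the stale local original so it is never pushed again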

    Read the article

  • click events issue in jquery.

    - by alex
    I don't know if the title explains this situation well. Below is code I wrote to create div elements when the button is pressed. Then, by clicking on any of the created divs, we can change the div's background by choosing a color from the drop-down box. However, if I click on another div and try to change the color by selecting another color from the drop-down, the previously clicked divs also get affected by the new color. Why is this happening? I only want the selected div to change color, without affecting the previously clicked divs. I read a lot of threads on this site, some of which talk about unbinding clicks, but I'm unable to solve the problem. Thanks for the help. <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js" type="text/javascript"></script> <style> .aaa { width:100px; height:100px;background-color:#ccc;margin-bottom:5px;} p{widht:500px; height:500px; margin:0px; padding:0px;border:2px solid red;color:#000;} </style> <select id="item_select" name="item"> <option value="1">red</option> <option value="2">blue</option> <option value="3">green</option> </select> <button>Click to create divs</button> <p id="ptest"></p> <script type="text/javascript"> var dividcount = 1; $("button").click(run); function run(){ var myDiv = $('<div id=lar'+dividcount+' class=aaa></div>'); $(myDiv).clone().appendTo('#ptest'); $(dividcount++); $("div").bind('click',(function() { var box = $("div").index(this); var idd = (this.id); $("#"+idd).text("div #"+idd); $("select").change(function(){ var str= $("select option:selected").text(); $("#"+idd).css("background-color",str); }); })); }; </script>
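
    A hedged sketch of one way around this: the posted code binds a fresh change handler to the select on every click, and each old handler still fires with its captured idd, which is why the earlier divs keep changing. Binding the change handler once and remembering only the last clicked div avoids that (the names below are made up):

        var selectedDiv = null;

        // one delegated click handler for current and future divs (jQuery 1.4.x: live)
        $('div.aaa').live('click', function () {
            selectedDiv = $(this);
        });

        // one change handler, bound once, that only touches the last clicked div
        $('#item_select').change(function () {
            if (selectedDiv) {
                var color = $('#item_select option:selected').text();
                selectedDiv.css('background-color', color);
            }
        });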

    Read the article

  • What IDE to use for Python

    - by husayt
    As a Python newbie, it is interesting to know what IDE's ("GUIs/editors") others use for Python coding. If you can just give the name (e.g. Textpad, Eclipse ..) that will be enough. If it is already mentioned, you can just vote for it. But if you can also give some more comparative information, that will be much appreciated. Thanks. Update: Results so far PyDev with Eclipse (CP, F, AC, PD, EM, SI, MLS, UML, SC, UT, LN, CF, BM) Komodo (CP, C/F, MLS, PD, AC, SC, SI, BM, LN, CF, CT) Emacs (CP, F, AC, MLS, PD, EM, SC, SI, BM, LN, CF, CT, UT, UML) Vim (CP, F, AC, MLS, SI, BM, LN, CF ) TextMate (Mac, CT, CF, MLS, SI, BM, LN) Gedit (Linux, F, AC, MLS, BM, LN, CT [sort of]) Idle (CP, F, AC) PIDA (Linux, CP, F, AC, MLS, SI, BM, LN, CF)(VIM Based) NotePad++ (Windows) BlueFish (Linux) JEdit (CP, F, BM, LN, CF, MLS) E-Texteditor (TextMate Clone for Windows) WingIde (CP, C, AC, MLS (support for C), PD, EM, SC, SI, BM, LN, CF, CT, UT) Eric Ide (CP, F, AC, PD, EM, SI, LN, CF, UT) Pyscripter (Windows, F, AC, PD, EM, SI, LN, CT, UT) ConTEXT (Windows, C) SPE (F, AC, UML) SciTE (CP, F, MLS, EM, BM, LN, CF, CT, SH) Zeus (W, C, BM, LN, CF, SI, SC, CT) NetBeans (CP, F, PD, UML, AC, MLS, SC, SI, BM, LN, CF, CT, UT, RAD) DABO (CP) BlackAdder (C, CP, CF, SI) PythonWin (W, F, AC, PD, SI, BM, CF) Geany (CP, F, very limited AC, MLS, SI, BM, LN, CF) UliPad (CP, F, AC, PD, MLS, SI, LI, CT, UT, BM) Boa Constructor (CP, F, AC, PD, EM, SI, BM, LN, UML, CF, CT) ScriptDev (W, C, AC, MLS, PD, EM, SI, BM, LN, CF, CT) Spider (CP, F, AC) Editra (CP, F, AC, MLS, SC, SI, BM, LN, CF) Pfaide (Windows, C, AC, MLS, SI, BM, LN, CF, CT) KDevelop (CP, F, MLS, SC, SI, BM, LN, CF) Acronyms used: CP - Cross Platfom C - Commercial F - Free AC - Automatic Code-completion MLS - Multi-Language Support PD - Integrated Python Debugging EM - ErrorMarkup SC - Source Control integration SI - Smart Indent BM - Bracket Matching LN - Line Numbering UML - UML editing / viewing CF - Code Folding CT - Code Templates UT - Unit Testing UID - Gui Designer (e.g. QT, Eric, ..) DB - integrated database support RAD - Rapid app development support I don't mention basics like Syntax highlighting as I expect these by default. This is a just dry list reflecting your feedback and comments, I am not advocating any of these tools. I will keep updating this list as you keep posting your answers. PS. Can you help me to add features of the above editors to the list (like autocomplete, debugging, or etc)?

    Read the article

  • How can I extract paragraphs and selected lines with Perl?

    - by neversaint
    I have a text where I need to: to extract the whole paragraph under the section "Aceview summary" until the line that starts with "Please quote" (not to be included). to extract the line that starts with "The closest human gene". to store them into array with two elements. The text looks like this (also on pastebin): AceView: gene:1700049G17Rik, a comprehensive annotation of human, mouse and worm genes with mRNAs or ESTsAceView. <META NAME="title" CONTENT=" AceView: gene:1700049G17Rik a comprehensive annotation of human, mouse and worm genes with mRNAs or EST"> <META NAME="keywords" CONTENT=" AceView, genes, Acembly, AceDB, Homo sapiens, Human, nematode, Worm, Caenorhabditis elegans , WormGenes, WormBase, mouse, mammal, Arabidopsis, gene, alternative splicing variant, structure, sequence, DNA, EST, mRNA, cDNA clone, transcript, transcription, genome, transcriptome, proteome, peptide, GenBank accession, dbest, RefSeq, LocusLink, non-coding, coding, exon, intron, boundary, exon-intron junction, donor, acceptor, 3'UTR, 5'UTR, uORF, poly A, poly-A site, molecular function, protein annotation, isoform, gene family, Pfam, motif ,Blast, Psort, GO, taxonomy, homolog, cellular compartment, disease, illness, phenotype, RNA interference, RNAi, knock out mutant expression, regulation, protein interaction, genetic, map, antisense, trans-splicing, operon, chromosome, domain, selenocysteine, Start, Met, Stop, U12, RNA editing, bibliography"> <META NAME="Description" CONTENT= " AceView offers a comprehensive annotation of human, mouse and nematode genes reconstructed by co-alignment and clustering of all publicly available mRNAs and ESTs on the genome sequence. Our goals are to offer a reliable up-to-date resource on the genes, their functions, alternative variants, expression, regulation and interactions, in the hope to stimulate further validating experiments at the bench "> <meta name="author" content="Danielle Thierry-Mieg and Jean Thierry-Mieg, NCBI/NLM/NIH, [email protected]"> <!-- var myurl="av.cgi?db=mouse" ; var db="mouse" ; var doSwf="s" ; var classe="gene" ; //--> However I am stuck with the following script logic. What's the right way to achieve that? #!/usr/bin/perl -w my $INFILE_file_name = $file; # input file name open ( INFILE, '<', $INFILE_file_name ) or croak "$0 : failed to open input file $INFILE_file_name : $!\n"; my @allsum; while ( <INFILE> ) { chomp; my $line = $_; my @temp1 = (); if ( $line =~ /^ AceView summary/ ) { print "$line\n"; push @temp1, $line; } elsif( $line =~ /Please quote/) { push @allsum, [@temp1]; @temp1 = (); } elsif ($line =~ /The closest human gene/) { push @allsum, $line; } } close ( INFILE ); # close input file # Do something with @allsum There are many files like that I need to process.
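
    A hedged sketch of the capture logic, assuming the section really begins at a line containing "AceView summary" and ends just before a line starting with "Please quote": keep a flag while inside the section and accumulate into one buffer, then store the buffer and the "closest human gene" line as the two array elements.

        my @allsum;
        my $in_summary = 0;
        my $summary    = '';

        open( my $in, '<', $file ) or die "cannot open $file: $!";
        while ( my $line = <$in> ) {
            chomp $line;
            $in_summary = 1 if $line =~ /AceView summary/;    # section starts here
            $in_summary = 0 if $line =~ /^Please quote/;      # stop before this line
            $summary .= "$line\n" if $in_summary;
            push @allsum, $line if $line =~ /^The closest human gene/;
        }
        close $in;
        unshift @allsum, $summary;   # element 0: summary paragraph, element 1: gene line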

    Read the article

  • Nested Sortable JQuery list doesn't work in IE while it does in FF

    - by Ben Fransen
    Hello everybody. Although I use this site quite often as a resource for jQuery-related problems, I can't seem to find an answer this time, so here is my first post. During the day at work I'm developing an information system for generating MS Word documents. One of the modules I'm developing is for defining a default chapter selection for the table of contents and selecting texts by the given chapters. A user can submit new chapters and, if necessary, add them as child chapters to a parent, flag them as 'adopt in TOC' and link a default text (from another module) to the chapter. My chapter list is retrieved recursively from a MySQL table and could look something like this: <ul class="sortableChapterlist"> <li>1</li> <li>2 <ul class="sortableChapterlist"> <li>2.1</li> <li>2.2</li> <li>2.3 <ul class="sortableChapterlist"> <li>2.3.1</li> <li>2.3.2</li> </ul> </li> </ul> </li> </ul> In Firefox the code works like a charm, but IE (7) doesn't seem to like it that much. I'm only able to correctly drag around the main chapters. When attempting to drag a subchapter, no matter its level, the corresponding main chapter lifts up with some of its children, never all of them. This is the jQuery code I'm using to accomplish the task: $(function(){ $(".sortableChapterlist").sortable({ opacity: 0.7, helper: 'clone', cursor: 'move' }); $(".sortableChapterlist").selectable(); $(".sortableChapterlist").disableSelection(); }); Does anybody have any ideas about this? I'm guessing IE kind of falls over the multiple class reference "chapter_list" in combination with jQuery trying to handle the draggable/sortable. Kind regards from Holland, Ben Fransen
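
    A hedged idea rather than a confirmed IE7 fix: the single class selector turns every nested <ul> into an independent sortable, which is known to be fragile; explicitly connecting the lists and not initialising selectable() on the same elements as sortable() is worth trying:

        $(function () {
            $(".sortableChapterlist").sortable({
                opacity: 0.7,
                helper: 'clone',
                cursor: 'move',
                connectWith: '.sortableChapterlist'  // allow items to move between levels
            });
            // selectable() and sortable() both capture mouse events; keep only sortable
            $(".sortableChapterlist").disableSelection();
        });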

    Read the article

  • Django + FastCGI - randomly raising OperationalError

    - by ibz
    I'm running a Django application. Had it under Apache + mod_python before, and it was all OK. Switched to Lighttpd + FastCGI. Now I randomly get the following exception (neither the place nor the time where it appears seem to be predictable). Since it's random, and it appears only after switching to FastCGI, I assume it has something to do with some settings. Found a few results when googleing, but they seem to be related to setting maxrequests=1. However, I use the default, which is 0. Any ideas where to look for? PS. I'm using PostgreSQL. Might be related to that as well, since the exception appears when making a database query. Thanks. File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 86, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 140, in root if not self.has_permission(request): File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 99, in has_permission return request.user.is_authenticated() and request.user.is_staff File "/usr/lib/python2.6/site-packages/django/contrib/auth/middleware.py", line 5, in __get__ request._cached_user = get_user(request) File "/usr/lib/python2.6/site-packages/django/contrib/auth/__init__.py", line 83, in get_user user_id = request.session[SESSION_KEY] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 46, in __getitem__ return self._session[key] File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 172, in _get_session self._session_cache = self.load() File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/db.py", line 16, in load expire_date__gt=datetime.datetime.now() File "/usr/lib/python2.6/site-packages/django/db/models/manager.py", line 93, in get return self.get_query_set().get(*args, **kwargs) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 304, in get num = len(clone) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 160, in __len__ self._result_cache = list(self.iterator()) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 275, in iterator for row in self.query.results_iter(): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 206, in results_iter for rows in self.execute_sql(MULTI): File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 1734, in execute_sql cursor.execute(sql, params) OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.
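
    A hedged workaround that is often tried for this symptom (a long-lived FastCGI worker reusing a PostgreSQL connection the server has already closed): recycle the workers after a fixed number of requests. The flags below are for Django's old runfcgi command with flup; the values are illustrative:

        python manage.py runfcgi method=prefork host=127.0.0.1 port=8081 \
            maxrequests=500 pidfile=/var/run/myproject-fcgi.pid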

    Read the article

  • How to call a function that is inside a jQuery plugin from outside the plugin?

    - by CaTz
    hi, i am using textarea elastic plugin JQuery. this is the plugin (function(jQuery){ jQuery.fn.extend({ elastic: function() { // We will create a div clone of the textarea // by copying these attributes from the textarea to the div. var mimics = [ 'paddingTop', 'paddingRight', 'paddingBottom', 'paddingLeft', 'fontSize', 'lineHeight', 'fontFamily', 'width', 'fontWeight']; return this.each( function() { // Elastic only works on textareas if ( this.type != 'textarea' ) { return false; } var $textarea = jQuery(this), $twin = jQuery('<div />').css({'position': 'absolute','display':'none','word-wrap':'break-word'}), lineHeight = parseInt($textarea.css('line-height'),10) || parseInt($textarea.css('font-size'),'10'), minheight = parseInt($textarea.css('height'),10) || lineHeight*3, maxheight = parseInt($textarea.css('max-height'),10) || Number.MAX_VALUE, goalheight = 0, i = 0; // Opera returns max-height of -1 if not set if (maxheight < 0) { maxheight = Number.MAX_VALUE; } // Append the twin to the DOM // We are going to meassure the height of this, not the textarea. $twin.appendTo($textarea.parent()); // Copy the essential styles (mimics) from the textarea to the twin var i = mimics.length; while(i--){ $twin.css(mimics[i].toString(),$textarea.css(mimics[i].toString())); } // Sets a given height and overflow state on the textarea function setHeightAndOverflow(height, overflow){ curratedHeight = Math.floor(parseInt(height,10)); if($textarea.height() != curratedHeight){ $textarea.css({'height': curratedHeight + 'px','overflow':overflow}); } } // This function will update the height of the textarea if necessary function update() { // Get curated content from the textarea. var textareaContent = $textarea.val().replace(/&/g,'&amp;').replace(/ /g, '&nbsp;').replace(/<|>/g, '&gt;').replace(/\n/g, '<br />'); var twinContent = $twin.html(); if(textareaContent+'&nbsp;' != twinContent){ // Add an extra white space so new rows are added when you are at the end of a row. $twin.html(textareaContent+'&nbsp;'); // Change textarea height if twin plus the height of one line differs more than 3 pixel from textarea height if(Math.abs($twin.height()+lineHeight/3 - $textarea.height()) > 3){ var goalheight = $twin.height()+lineHeight/3; if(goalheight >= maxheight) { setHeightAndOverflow(maxheight,'auto'); } else if(goalheight <= minheight) { setHeightAndOverflow(minheight,'hidden'); } else { setHeightAndOverflow(goalheight,'hidden'); } } } } // Hide scrollbars $textarea.css({'overflow':'hidden'}); // Update textarea size on keyup $textarea.keyup(function(){ update(); }); $textarea.focus(function(){ update(); }); // And this line is to catch the browser paste event $textarea.live('input paste',function(e){ setTimeout( update, 250); }); // Run update once when elastic is initialized update(); }); } }); })(jQuery); How can i call from the outside of the plugin to the update function that is inside?
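
    A hedged workaround: update() is private to the plugin's closure, but the plugin binds it to the textarea's keyup, focus and 'input paste' events, so triggering one of those from outside has the same effect (the selector below is made up):

        // initialise the plugin as usual
        $('#myTextarea').elastic();

        // later, after changing the value programmatically, fire the internal update()
        $('#myTextarea').val('some new, much longer content...').trigger('keyup');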

    Read the article

  • Java ExecutorService Java heap space problems

    - by Sergey Aganezov jr
    I have a little bit of a problem in the Java multithreading department. I have a class called public class ThreadWorker implements Runnable { //some code in here public void run(){ // invokes some recursion method in the ThreadWorker itself, // which will stop eventually } } All in all, a pretty simple "worker" that can work on its own. To work with threads I'm using public static int THREAD_NUMBER = 4; public static ExecutorService es = Executors.newFixedThreadPool(THREAD_NUMBER); Adding instances of the ThreadWorker class happens here: public void recursiveMethod(ArrayList<Integer> elements, MyClass data){ if (elements.size() == 0 && data.qualifies()){ ThreadWorker tw = new ThreadWorker(data); es.execute(tw); return; } for (int i=0; i< elements.size(); i++){ // some code to prevent my problem MyClass data1 = new MyClass(data); MyClass data2 = new MyClass(data); ArrayList<Integer> newElements = (ArrayList<Integer>)elements.clone(); data1.update(elements.get(i)); data2.update(-1 * elements.get(i)); newElements.remove(i); recursiveMethod(newElements, data1); recursiveMethod(newElements, data2); } } The problem is that the depth of the recursion tree is quite big, and so is its width, so a lot of ThreadWorkers are added to the ExecutorService, and after some time on big input I get Exception in thread "pool-1-thread-2" java.lang.OutOfMemoryError: Java heap space which I think is caused by the enormous number of ThreadWorkers I'm adding to the ExecutorService to be executed, so it runs out of memory. Every ThreadWorker takes about 40 MB of RAM for everything it needs. Is there a method to get how many tasks (instances of classes implementing the Runnable interface) have been added to the ExecutorService? Then I could add something like while ("number of tasks in the ExecutorService" > 10){ Thread.sleep(10000); } in the code shown above (in the "// some code to prevent my problem" spot), so I won't go too deep or too broad with my recursion and prevent those exception-throwing situations. Sincerely, Sergey Aganezov jr.
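
    A hedged sketch of one standard way to cap the backlog: instead of polling a count and sleeping, build the pool with a bounded queue and CallerRunsPolicy, so the recursing thread runs a task itself whenever the queue is full; the queue size of 16 is illustrative.

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.ThreadPoolExecutor;
        import java.util.concurrent.TimeUnit;

        public class BoundedPool {
            public static final int THREAD_NUMBER = 4;

            // Same size as Executors.newFixedThreadPool(4), but with a bounded queue:
            // once 16 tasks are waiting, CallerRunsPolicy makes the submitting thread
            // execute the task itself, which naturally throttles the recursion.
            public static final ExecutorService es = new ThreadPoolExecutor(
                    THREAD_NUMBER, THREAD_NUMBER,
                    0L, TimeUnit.MILLISECONDS,
                    new ArrayBlockingQueue<Runnable>(16),
                    new ThreadPoolExecutor.CallerRunsPolicy());

            // If a count is still wanted: ((ThreadPoolExecutor) es).getQueue().size()
        }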

    Read the article

  • How to store wiki sites (vcs)

    - by Eugen
    Hello, as a personal project I am trying to write a wiki with the help of django. I'm a beginner when it comes to web development. I am at the (early) point where I need to decide how to store the wiki sites. I have three approaches in mind and would like to know your suggestion. Flat files I considered a flat file approach with a version control system like git or mercurial. Firstly, I would have some example wikis to look at like http://hatta.sheep.art.pl/. Secondly, the vcs would probably deal with editing conflicts and keeping the edit history, so I would not have to reinvent the wheel. And thirdly, I could probably easily clone the wiki repository, so I (or for that matter others) can have an offline copy of the wiki. On the other hand, as far as I know, I can not use django models with flat files. Then, if I wanted to add fields to a wiki site, like a category, I would need to somehow keep a reference to that flat file in order to associate the fields in the database with the flat file. Besides, I don't know if it is a good idea to have all the wiki sites in one repository. I imagine it is more natural to have kind of like a repository per wiki site resp. file. Last but not least, I'm not sure, but I think using flat files would limit my deploying capabilities because web hosts maybe don't allow creating files (I'm thinking, for example, of Google App Engine) Storing in a database By storing the wiki sites in the database I can utilize django models and associate arbitrary fields with the wiki site. I probably would also have an easier life deploying the wiki. But I would not get vcs features like history and conflict resolving per se. I searched for django-extensions to help me and I found django-reversion. However, I do not fully understand if it fit my needs. Does it track model changes like for example if I change the django model file, or does it track the content of the models (which would fit my need). Plus, I do not see if django reversion would help me with edit conflicts. Storing a vcs repository in a database field This would be my ideal solution. It would combine the advantages of both previous approaches without the disadvantages. That is; I would have vcs features but I would save the wiki sites in a database. The problem is: I have no idea how feasible that is. I just imagine saving a wiki site/source together with a git/mercurial repository in a database field. Yet, I somehow doubt database fields work like that. So, I'm open for any other approaches but this is what I came up with. Also, if you're interested, you can find the crappy early test I'm working on here http://github.com/eugenkiss/instantwiki-test
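
    A hedged sketch of the second approach (database storage with hand-rolled history), since that is the part django-reversion may or may not cover for you: store one row per saved revision, which gives history and a hook for conflict detection; every name below is invented:

        from django.db import models

        class Page(models.Model):
            title = models.CharField(max_length=200, unique=True)
            category = models.CharField(max_length=100, blank=True)  # extra fields are easy here

        class Revision(models.Model):
            page = models.ForeignKey(Page, related_name='revisions')
            text = models.TextField()
            comment = models.CharField(max_length=200, blank=True)
            created = models.DateTimeField(auto_now_add=True)
            # the revision the editor started from; lets you detect concurrent edits
            parent = models.ForeignKey('self', null=True, blank=True)

            class Meta:
                ordering = ['-created']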

    Read the article

  • .NET OCRing an Image

    - by Kirschstein
    I'm trying to use MODI to OCR a program's window. It works fine for screenshots I grab programmatically using win32 interop like this: public string SaveScreenShotToFile() { RECT rc; GetWindowRect(_hWnd, out rc); int width = rc.right - rc.left; int height = rc.bottom - rc.top; Bitmap bmp = new Bitmap(width, height); Graphics gfxBmp = Graphics.FromImage(bmp); IntPtr hdcBitmap = gfxBmp.GetHdc(); PrintWindow(_hWnd, hdcBitmap, 0); gfxBmp.ReleaseHdc(hdcBitmap); gfxBmp.Dispose(); string fileName = @"c:\temp\screenshots\" + Guid.NewGuid().ToString() + ".bmp"; bmp.Save(fileName); return fileName; } This image is then saved to a file and run through MODI like this: private string GetTextFromImage(string fileName) { MODI.Document doc = new MODI.DocumentClass(); doc.Create(fileName); doc.OCR(MODI.MiLANGUAGES.miLANG_ENGLISH, true, true); MODI.Image img = (MODI.Image)doc.Images[0]; MODI.Layout layout = img.Layout; StringBuilder sb = new StringBuilder(); for (int i = 0; i < layout.Words.Count; i++) { MODI.Word word = (MODI.Word)layout.Words[i]; sb.Append(word.Text); sb.Append(" "); } if (sb.Length > 1) sb.Length--; return sb.ToString(); } This part works fine; however, I don't want to OCR the entire screenshot, just portions of it. I try cropping the image programmatically like this: private string SaveToCroppedImage(Bitmap original) { Bitmap result = original.Clone(new Rectangle(0, 0, 250, 250), original.PixelFormat); var fileName = "c:\\" + Guid.NewGuid().ToString() + ".bmp"; result.Save(fileName, original.RawFormat); return fileName; } and then OCR this smaller image; however, MODI throws an exception, 'OCR running error', with error code -959967087. Why can MODI handle the original bitmap but not the smaller version taken from it?
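
    A hedged experiment rather than a known fix: MODI is reportedly picky about very small bitmaps, so one thing to try is drawing the cropped region onto a larger white canvas (and saving it as TIFF) before handing the file to doc.Create; the sizes and path below are arbitrary:

        private string SaveToPaddedCroppedImage(Bitmap original)
        {
            // take the same 250x250 region as before
            Bitmap crop = original.Clone(new Rectangle(0, 0, 250, 250), original.PixelFormat);

            // draw it onto a larger white canvas so the OCR engine sees a full-size page
            Bitmap canvas = new Bitmap(1100, 850);
            using (Graphics g = Graphics.FromImage(canvas))
            {
                g.Clear(Color.White);
                g.DrawImage(crop, new Point(20, 20));
            }

            string fileName = @"c:\temp\" + Guid.NewGuid().ToString() + ".tif";
            canvas.Save(fileName, System.Drawing.Imaging.ImageFormat.Tiff);
            return fileName;
        }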

    Read the article

  • How can I highlight empty fields in ASP.NET MVC 2 before model binding has occurred?

    - by Richard Poole
    I'm trying to highlight certain form fields (let's call them important fields) when they're empty. In essence, they should behave a bit like required fields, but they should be highlighted if they are empty when the user first GETs the form, before POST & model validation has occurred. The user can also ignore the warnings and submit the form when these fields are empty (i.e. empty important fields won't cause ModelState.IsValid to be false). Ideally it needs to work server-side (empty important fields are highlighted with warning message on GET) and client-side (highlighted if empty when losing focus). I've thought of a few ways of doing this, but I'm hoping some bright spark can come up with a nice elegant solution... Just use a CSS class to flag important fields Update every view/template to render important fields with an important CSS class. Write some jQuery to highlight empty important fields when the DOM is ready and hook their blur events so highlights & warning messages can be shown/hidden as appropriate. Pros: Quick and easy. Cons: Unnecessary duplication of importance flags and warning messages across views & templates. Clients with JavaScript disabled will never see highlights/warnings. Custom data annotation and client-side validator Create classes similar to RequiredAttribute, RequiredAttributeAdapter and ModelClientValidationRequiredRule, and register the adapter with DataAnnotationsModelValidatorProvider.RegisterAdapter. Create a client-side validator like this that responds to the blur event. Pros: Data annotation follows DRY principle (Html.ValidationMessageFor<T> picks up field importance and warning message from attribute, no duplication). Cons: Must call TryValidateModel from GET actions to ensure empty fields are decorated. Not technically validation (client- & server-side rules don't match) so it's at the mercy of framework changes. Clients with JavaScript disabled will never see highlights/warnings. Clone the entire validation framework It strikes me that I'm trying to achieve exactly the same thing as validation but with warnings rather than errors. It needs to run before model binding (and therefore validation) has occurred. Perhaps it's worth designing a similar framework with annotations like Required, RegularExpression, StringLength, etc. that somehow cause Html.TextBoxFor<T> etc. to render the warning CSS class and Html.ValidationMessageFor<T> to emit the warning message and JSON needed to enable client-side blur checks. Pros: Sounds like something MVC 2 could do with out of the box. Cons: Way too much effort for my current requirement! I'm swaying towards option 1. Can anyone think of a better solution?
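
    For what it's worth, a hedged sketch of the client-side half of option 1 (the class names and the assumption of a warning element right after each field are invented), mainly to show how little script the simplest route needs:

        $(function () {
            function refreshImportantHighlight() {
                var $field = $(this);
                var isEmpty = $.trim($field.val()) === '';
                $field.toggleClass('important-empty', isEmpty);    // CSS supplies the highlight
                $field.next('.important-warning').toggle(isEmpty); // assumes a warning span follows
            }
            $('.important').each(refreshImportantHighlight)        // highlight on the initial GET
                           .blur(refreshImportantHighlight);       // and whenever focus is lost
        });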

    Read the article

  • Git repo planning questions

    - by masonk
    At work, development uses perforce to handle code sharing. I won't say "revision control", because we aren't allowed to check in changes until they are ready for regression testing. In order to get my personal change sets under revision control, I've been given the go-ahead to build my own git and initialize the client view of the perforce depot as a git repo. There are some difficulties in doing this, however. The client view lives in a subfolder of ~, (~/p4), and I want to put ~ under revision control as well, with its own separate history. I can't figure out how to keep the history for ~ separate from ~/p4 without using a submodule. The problem with a submodule is that it looks like I have to go make a repository that will become the submodule and then git submodule add <repo> <path>. But there is nowhere to make the submodule's repository except in ~. There seems to be no safe place to create the initial client view of the depot with git p4 clone. (I'm working off of the assumption that initing or cloning a repo into a subdirectory of a git repo is not supported. At least, I can find nothing authoritative on nested git repos.) edit: Is merely ignoring ~/p4 in the repo rooted at ~ enough to allow me to init a nested repo in ~/p4? My __git_ps1 function still thinks I'm in a git repository when I visit an ignored subdirectory of a git repo, so I'm inclined to think not. I need the "remote" repository created by git p4 sync to be a branch in ~/p4. We are required to keep all of our code in ~/p4 so that it doesn't get backed up. Can I pull from a "remote" branch that is really a local branch? This one is just for convenience, but I thought I could learn something by asking it. For 99% of the project, I just want to start the with the p4 head revision as the inital commit object. For the other 1%, I would like to suck down the entire p4 history so that I can browse it in git. IOW, after I'm done initalizing it, the initial commit of remotes/p4/master branch will contain: revision 1 of //depot/prod/Foo/Bar/* revision X of other files in //depot/prod/*, where X is the head revision and the remotes/p4/master branch contains Y commits, where Y is the number of changelists that had a file in //depot/prod/Foo/Bar/*, with each commit in the history corresponding to one of those p4 changelists, and HEAD looking like p4's head.
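
    A hedged sketch of the nesting part only: nested work trees behave as two independent repositories as long as the outer one ignores the inner one, and __git_ps1 reporting the outer repo inside ~/p4 simply means ~/p4 is not a repository yet. The depot path below is an example:

        cd ~
        git init
        echo "p4/" >> .gitignore          # keep ~'s history completely separate from ~/p4

        cd ~/p4
        git init                          # nested repo is fine once the outer repo ignores it
        git p4 sync //depot/prod@all      # imports history into refs/remotes/p4/master
        git checkout -b work p4/master    # local branch to carry personal change sets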

    Read the article

  • crash in calloc

    - by mmd
    I'm trying to debug a program I wrote. I ran it inside gdb and I managed to catch a SIGABRT from inside calloc(). I'm completely confused about how this can arise. Can it be a bug in gcc or even libc?? More details: My program uses OpenMP. I ran it through valgrind in single-threaded mode with no errors. I also use mmap() to load a 40GB file, but I doubt that is relevant. Inside gdb, I'm running with 30 threads. Several identical runs (same input&CL) finished correctly, until the problematic one that I caught. On the surface this suggests there might be a race condition of some type. However, the SIGABRT comes from calloc() which is out of my control. Here is some relevant gdb output: (gdb) info threads [...] * 11 Thread 0x7ffff0056700 (LWP 73449) 0x00007ffff6a948a5 in raise () from /lib64/libc.so.6 [...] (gdb) thread 11 [Switching to thread 11 (Thread 0x7ffff0056700 (LWP 73449))]#0 0x00007ffff6a948a5 in raise () from /lib64/libc.so.6 (gdb) bt #0 0x00007ffff6a948a5 in raise () from /lib64/libc.so.6 #1 0x00007ffff6a96085 in abort () from /lib64/libc.so.6 #2 0x00007ffff6ad1fe7 in __libc_message () from /lib64/libc.so.6 #3 0x00007ffff6ad7916 in malloc_printerr () from /lib64/libc.so.6 #4 0x00007ffff6adb79f in _int_malloc () from /lib64/libc.so.6 #5 0x00007ffff6adbdd6 in calloc () from /lib64/libc.so.6 #6 0x000000000040e87f in my_calloc (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/../gmapper/../common/my-alloc.h:286 #7 read_get_hit_list_per_strand (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/mapping.c:1046 #8 0x000000000041308a in read_get_hit_list (re=<value optimized out>, options=0x632010, n_options=1) at gmapper/mapping.c:1239 #9 handle_read (re=<value optimized out>, options=0x632010, n_options=1) at gmapper/mapping.c:1806 #10 0x0000000000404f35 in launch_scan_threads (.omp_data_i=<value optimized out>) at gmapper/gmapper.c:557 #11 0x00007ffff7230502 in ?? () from /usr/lib64/libgomp.so.1 #12 0x00007ffff6dfc851 in start_thread () from /lib64/libpthread.so.0 #13 0x00007ffff6b4a11d in clone () from /lib64/libc.so.6 (gdb) f 6 #6 0x000000000040e87f in my_calloc (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/../gmapper/../common/my-alloc.h:286 286 res = calloc(size, 1); (gdb) p size $2 = 814080 (gdb) The function my_calloc() is just a wrapper, but the problem is not in there, as the real calloc() call looks legit. These are the limits set in the shell: $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 2067285 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 1024 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited The program is not out of memory, it's using 41GB on a machine with 256GB available: $ top -b -n 1 | grep gmapper 73437 user 20 0 41.5g 16g 15g T 0.0 6.6 55:17.24 gmapper-ls $ free -m total used free shared buffers cached Mem: 258437 195567 62869 0 82 189677 -/+ buffers/cache: 5807 252629 Swap: 0 0 0 I compiled using gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), with flags -g -O2 -DNDEBUG -mmmx -msse -msse2 -fopenmp -Wall -Wno-deprecated -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS.
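
    A hedged observation plus two checks, not a diagnosis: an abort() raised from inside _int_malloc/calloc almost always means the heap was corrupted earlier (for example by an out-of-bounds write in another thread), so the interesting question is what scribbled over the allocator's metadata. Two ways to narrow it down (invocations are illustrative):

        # glibc's own consistency checks: abort at the first detected corruption
        MALLOC_CHECK_=3 ./gmapper-ls <args...>

        # re-run the multi-threaded case under a race detector; since single-threaded
        # valgrind was clean, a data race is the prime suspect
        valgrind --tool=helgrind ./gmapper-ls <args...>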

    Read the article

< Previous Page | 59 60 61 62 63 64 65 66 67 68  | Next Page >