Search Results

Search found 15704 results on 629 pages for 'block world'.

  • clear explanation sought: throw() and stack unwinding

    - by Jerry Gagelman
    I'm not a programmer but have learned a lot watching others. I am writing wrapper classes to simplify things with a really technical API that I'm working with. Its routines return error codes, and I have a function that converts those to strings: static const char* LibErrString(int errno); For uniformity I decided to have members of my classes throw an exception when an error is encountered. I created a class: struct MyExcept : public std::exception { const char* errstr_; const char* what() const throw() {return errstr_;} MyExcept(const char* errstr) : errstr_(errstr) {} }; Then, in one of my classes: class Foo { public: void bar() { int err = SomeAPIRoutine(...); if (err != SUCCESS) throw MyExcept(LibErrString(err)); // otherwise... } }; The whole thing works perfectly: if SomeAPIRoutine returns an error, a try-catch block around the call to Foo::bar catches a standard exception with the correct error string in what(). Then I wanted the member to give more information: void Foo::bar() { char adieu[128]; int err = SomeAPIRoutine(...); if (err != SUCCESS) { std::strcpy(adieu,"In Foo::bar... "); std::strcat(adieu,LibErrString(err)); throw MyExcept((const char*)adieu); } // otherwise... } However, when SomeAPIRoutine returns an error, the what() string returned by the exception contains only garbage. It occurred to me that the problem could be due to adieu going out of scope once the throw is called. I changed the code by moving adieu out of the member definition and making it an attribute of the class Foo. After this, the whole thing worked perfectly: a try-catch block around a call to Foo::bar that catches an exception has the correct (expanded) string in what(). Finally, my question: what exactly is popped off the stack (in sequence) when the exception is thrown in the if-block and the stack "unwinds"? As I mentioned above, I'm a mathematician, not a programmer. I could use a really lucid explanation of what goes onto the stack (in sequence) when this C++ gets converted into running machine code.
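
    For context, here is a minimal, self-contained sketch of the situation the question describes (the names are stand-ins, not the poster's real API): when the throw leaves bar(), stack unwinding destroys the local buffer adieu before the catch handler runs, so an exception that only stores the char* pointer is left pointing at dead memory. Having the exception own a std::string copy sidesteps the problem; the one-argument version presumably worked only because LibErrString returns a pointer to storage that survives unwinding.

        // A minimal sketch, not the poster's code: SomeAPIRoutine and the error
        // string are faked so the example compiles and runs on its own.
        #include <cstring>
        #include <exception>
        #include <iostream>
        #include <string>

        static const char* LibErrString(int) { return "device not ready"; }  // stand-in

        struct MyExcept : public std::exception {
            std::string errstr_;                          // owns a copy of the message
            explicit MyExcept(std::string errstr) : errstr_(std::move(errstr)) {}
            const char* what() const noexcept override { return errstr_.c_str(); }
        };

        void bar() {
            int err = -1;                                 // pretend SomeAPIRoutine failed
            if (err != 0) {
                char adieu[128];                          // local: destroyed during unwinding
                std::strcpy(adieu, "In Foo::bar... ");
                std::strcat(adieu, LibErrString(err));
                throw MyExcept(adieu);                    // std::string copies the buffer here
            }
        }

        int main() {
            try {
                bar();
            } catch (const std::exception& e) {
                std::cout << e.what() << '\n';            // safe: the message outlives bar()'s frame
            }
        }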

    Read the article

  • URL Rewrite, ServerVariables, URL Parts, HTTP to HTTPS Redirect. Week 9

    - by OWScott
    Last week I gave an intro to URL Rewrite; covering the basics and giving a real world example.  This week I dive in deeper and cover ServerVariables, the parts that make up the URL and another real world example of redirecting HTTP to HTTPS. This is week 9 of a 52 week series on various web administration related tasks.  Past and future videos can be found here. For reference, in the video I mentioned the following two blog posts: Viewing ServerVariables For a Site Parts of the URL available to URL Rewrite

    Read the article

  • How to use html and JavaScript in Content Editor web part in SharePoint2010

    - by ybbest
    Here are the steps you need to take to use HTML and JavaScript in a Content Editor web part. 1. Edit a site page and add a Content Editor web part to the page. 2. After the Content Editor is added to the page, it will display on the page as shown below. 3. Next, upload your HTML and JavaScript content as a text file to a document library inside your SharePoint site. Here is the content in the document: <script type="text/javascript"> alert("Hello World"); </script> 4. Edit the Content Editor web part and reference the file you just uploaded. 5. Save the page and you will see the Hello World prompt. References: http://stackoverflow.com/questions/5020573/sharepoint-2010-content-editor-web-part-duplicating-entries http://sharepointadam.com/2010/08/31/insert-javascript-into-a-content-editor-web-part-cewp/

    Read the article

  • Cheap, Awesome, Programmer-friendly City in Europe for 1 year Study Hiatus?

    - by Gonjasufi
    Next year I'll be 21. I'll have 3 years of professional experience under my belt (with a one-year break as a soldier). I'm planning to take 2 to 3 years off. Instead of going to a university I'm planning to work on personal projects and learn on my own. I'm looking for suggestions of great, cheap, programmer-friendly (e.g. lots of cafes, ordered food, parks, blazing-fast internet connection, Wi-Fi, lots of people who speak English) cities around the world, and specifically in Europe, as I also have European citizenship. If you can supply an estimated cost of living for that city, or a site for comparisons, that would also be great. Edit: I'm living in Tel Aviv, roughly the 20th-highest cost-of-living city in the world, so statistically speaking almost all cities are cheaper.

    Read the article

  • Adding new events to be handled by script part without recompilation of c++ source code

    - by paul424
    As the title says, I want to write an event system with the handling methods written in an external scripting language, namely AngelScript. The AS methods would have access to the game's world-modifying API (which has to be registered with the AngelScript engine as the game starts). I have come to this problem: if we use AngelScript for rapid prototyping of the game's world behavior, we would like to be able to define new event types without recompiling the main binary. Is that even possible? Don't stick to AngelScript; I am thinking about any possible scripting language. The main thought is this: if some really new event is to be constructed, we must monitor some information coming from the binary, for example a variable's value changing, and the events are just triggered on some conditions on those changes. We might be monitoring a creature's HP by having a function called on each change; there we could define new events such as creature hurt, creature killed, creature healed, or creature killed and tormented to pieces (like getting HP very much below 0). Are there any other ideas for constructing new events on the scripting side?
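
    A minimal sketch of the idea described above, in plain C++ with no AngelScript API (the lambdas stand in for script-side handlers): if the binary only exposes a raw change hook and event types are just names chosen at runtime, the scripting side can invent new event types without the binary being recompiled.

        // Hedged sketch: the binary only reports raw value changes; "new" event
        // types are names invented by handlers at runtime.
        #include <functional>
        #include <iostream>
        #include <string>
        #include <unordered_map>
        #include <vector>

        struct HpChange { int oldHp, newHp; };

        class EventBus {
            std::unordered_map<std::string, std::vector<std::function<void(const HpChange&)>>> handlers_;
        public:
            void on(const std::string& event, std::function<void(const HpChange&)> fn) {
                handlers_[event].push_back(std::move(fn));
            }
            void emit(const std::string& event, const HpChange& c) {
                for (auto& fn : handlers_[event]) fn(c);
            }
        };

        int main() {
            EventBus bus;

            // "Script side" (simulated with lambdas): derive richer events from the raw hook.
            bus.on("hp_changed", [&](const HpChange& c) {
                if (c.newHp < c.oldHp) bus.emit("creature_hurt", c);
                if (c.newHp <= 0)      bus.emit("creature_killed", c);
            });
            bus.on("creature_killed", [](const HpChange& c) {
                std::cout << "killed, hp went " << c.oldHp << " -> " << c.newHp << "\n";
            });

            // Binary side: the only thing compiled in is the raw change notification.
            bus.emit("hp_changed", HpChange{5, -3});
        }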

    Read the article

  • Oracle CloudWorld: Social, Mobile, Complete.

    - by Dana Singleterry
    Mobile, social, and cloud are redefining how business gets done. Discover what your company can do to take advantage of these new opportunities and gain a competitive advantage at Oracle CloudWorld. Experience insightful keynotes, real-world case studies, in-depth Q&A sessions, hands-on demos, face-to-face networking, and dedicated tracks for different audiences. In one day, you'll learn how Oracle Cloud can transform your organization. View Streams and Sessions. First event is coming soon: Dubai, UAE, January 15, followed by Los Angeles, US, January 29. Check out all the events around the world and find one that fits your schedule. Register Now!

    Read the article

  • Do backlinks to blocked content add value?

    - by David Fisher
    We've been debating the following SEO question at our office: if you block bot access to a page either via robots.txt or via on-page noindex metadata, does that negate the value of any backlinks to that page? We have a client who wants to block some event booking form pages from being indexed, as each booking form page has a unique URL parameter and the pages are "clogging up" the Google index; however, lots of websites link to those booking form pages and we wouldn't want to lose the value of those links. Any opinions welcomed.

    Read the article

  • Hadoop growing pains

    - by Piotr Rodak
    This post is not going to be about SQL Server. I have been reading more and more recently about “Big Data” – a very catchy term that describes the untamed increase in the data that mankind is producing each day and the struggle to capture the meaning of these data. Ten years ago, and perhaps even three years ago, this need was not so widely recognized. The increasing number of smartphones, and the discernible trend of mainstream Internet traffic shifting towards smartphone-generated traffic, mean that there is a bigger and bigger stream of information that has to be stored, transformed, analysed and perhaps monetized. The nature of this traffic makes it very difficult to wrap it into the boundaries of relational database engines. The amount of data makes it nearly impossible to process it in relational databases within a reasonable time. This is where ‘cloud’ technologies come into play. I just read a good article about the growing pains of Hadoop, which became one of the leading players in the distributed processing arena within the last year or two. Toby Baer concludes in it that the lack of enterprise-ready toolsets hinders Hadoop’s adoption in the enterprise world. While this is true, something else drew my attention. According to the article there are already about half a dozen commercially supported distributions of Hadoop. For me, not having been involved in the intricacies of the open-source world, this is quite an interesting observation. On one hand, it is good that there is competition, as it is beneficial in the end to the customer. On the other hand, the customer is faced with the difficulty of choosing the right distribution. In the future, when Hadoop distributions fork even more, this choice will be even harder. The distributions will have overlapping sets of features, yet will be quite incompatible with each other. I suppose it will take a few years until leaders emerge and the market begins to resemble what we see in the Linux world. There are myriads of distributions, but only a few are acknowledged by the industry as enterprise standard. Others are honed by bearded individuals with too much time to spend. In any case, the third fact I can’t help but notice about the proliferation of Hadoop distributions is that IT professionals will have jobs.   BuzzNet Tags: Hadoop,Big Data,Enterprise IT

    Read the article

  • What should we tell our unsupported IE6 users?

    - by Dan Fabulich
    In the upcoming version of our web app, we've broken IE6, and we don't intend to fix it. We've had a clear warning posted for IE6 users for some months; we've decided it's time not to support it. My question is: how should we communicate this to our users? Some people here feel that we should block IE6 users who would try to access the web app, because it's not going to work for them. Others feel that we should just leave up a warning, saying "This doesn't work in IE6," but not block them; instead, if they click to dismiss the warning, just let them in to the broken site to see for themselves that it doesn't work. Who is right? Is there a better way?

    Read the article

  • Web Developer or Software Engineer?

    - by dahacker89
    A question that I have been asking myself, and I am really confused about which path to take. So I need your help, guys, as to the pros and cons of these 2 professions in today's world. I love web application development, as the Web is the best thing to happen in this age and nearly everyone gets by on the World Wide Web. I also tend to keep learning about new technologies and about web services. On the other hand, I like software engineering for desktop applications as well, as I have had experience developing small-scale software in VB.Net, Java, C++, etc. Which path has more scope and a better future? What's your view?

    Read the article

  • Java Spotlight Episode 111: Bruno Souza @brjavaman and Fabiane Nardon @fabianenardon on StoryTroop @storytroop

    - by Roger Brinkley
    Interview with Bruno Souza and Fabiane Nardon on StoryTroop. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes. Show Notes – News: End of Public Updates for JDK 6; Bean Validation 1.1 public review approved; Two key JSRs accepted in time for Java EE 7; Public JCP EC meeting audio and materials posted; Devoxx UK and Devoxx France CFP open; JPA 2.1 Schema Generation; WebSocket, Java EE 7, and GlassFish. Events: Dec 3-5, jDays, Göteborg, Sweden; Dec 4-6, JavaOne Latin America, São Paulo, Brazil; Dec 14-15, IndicThreads, Pune, India; JCP Spec Lead Call December on Developing a TCK; JCP EC Face to Face Meeting, January 15-16, West Coast USA. Feature Interview: Bruno Souza is a Java Developer and Open Source Evangelist at Summa Technologies, and a Cloud Expert at ToolsCloud. Nurturing developer communities is a personal passion, and Bruno has worked actively with the Java, NetBeans, OpenSolaris, OFBiz, and many other open source communities. As founder and coordinator of SouJava (The Java Users Society), one of the world's largest Java User Groups, Bruno led the expansion of the Java movement in Brazil. Founder of the Worldwide Java User Groups Community, Bruno helped the creation and organization of hundreds of JUGs worldwide. A Java developer since the early days, Bruno participated in some of the largest Java projects in Brazil. Fabiane Nardon is a computer scientist who is passionate about creating software that will positively change the world we live in. She was the architect of the Brazilian Healthcare Information System, considered the largest Java EE application in the world and winner of the 2005 Duke's Choice Award. She has led several communities, including the JavaTools Community at java.net, where 800+ open source projects were born. She is a frequent speaker at conferences in Brazil and abroad, including JavaOne, OSCON, Jfokus, JustJava and more. She's also the author of several technical articles and a member of the program committee of several conferences such as JavaOne, OSCON, and TDC. She was chosen a Java Champion by Sun Microsystems in recognition of her contribution to the Java ecosystem. Currently, she works as a tools expert at ToolsCloud and in companies she co-founded, where she is helping to shape new disruptive Internet-based services. StoryTroop is a space where we combine multiple perspectives on a story. This creates an understanding of that story like never before. Pieces of a story are organized in time and space and anyone can add a different perspective. What's Cool: Geek Bike Ride at JavaOne LAD; Devoxx UK (Mar 26, 27) and FR (Mar 27-29) CFP; jFokus schedule is firming up; Nashorn Blog; 1,500 @JavaSpotlight Twitter followers.

    Read the article

  • Where are my sub templates?

    - by Tim Dexter
    This one is for standalone/BIEE users of Publisher. All the ERP/CRM/HCM folks are already catered for and can tuck into a nut cutlet and arugula salad. Sorry, I have just watched Food Inc, and even if only half of it is true, I'm still on a crusade in my house against mass-produced food. Wake up, World! If you have ventured into the world of sub templating, you'll be reaping some development benefit. In terms of shared report components and calculations they are very useful. Just exporting all of your report headers and footers to a single sub template can potentially save you hours and hours of work and make you look like a star. If someone in management gets it into their head that they would like Comic Sans rather than Arial in their report headers, it's a 10-minute job rather than 100 hours! What about the rest of the report content? I hear you cry. It's coming in 11g: full master template support. Your management wants bright blue borders with yellow backgrounds for all the tables in your reports? A 5-minute job! Getting back to sub templates and my comment about all the ERP/CRM/HCM folks being catered for: in the standalone release there is no out-of-the-box directory for you to drop your sub templates into. Dropping them into the main report directory would make sense, but they are not accessible there via a URL. An oversight on our part and something that will be addressed in 11g. Sub templates are now a first-class citizen in the world of BIP; you can upload them and BIP will know what to do with them. But what do you do right now? The easiest place to put them where BIP can 'see' them is to create a directory under the xmlpserver install directory in the J2EE container, e.g. $J2EE_HOME/xmlpserver/xmlpserver/subtemplates. You can call it whatever you want, but when the server is started up, that directory is accessible via a URL, i.e. http://tdexter:9704/xmlpserver/subtemplates/mysub.rtf. You can therefore put it at the top of your main templates and call the sub template. <?import: http://tdexter:9704/xmlpserver/subtemplates/mysub.rtf?> Of course, you can drop them anywhere you want; they just need to be in a web-server-mountable directory. Enjoy the arugula!

    Read the article

  • Recover RAID 5 data after created new array instead of re-using

    - by Brigadieren
    Folks please help - I am a newb with a major headache at hand (perfect storm situation). I have a 3 1tb hdd on my ubuntu 11.04 configured as software raid 5. The data had been copied weekly onto another separate off the computer hard drive until that completely failed and was thrown away. A few days back we had a power outage and after rebooting my box wouldn't mount the raid. In my infinite wisdom I entered mdadm --create -f... command instead of mdadm --assemble and didn't notice the travesty that I had done until after. It started the array degraded and proceeded with building and syncing it which took ~10 hours. After I was back I saw that that the array is successfully up and running but the raid is not I mean the individual drives are partitioned (partition type f8 ) but the md0 device is not. Realizing in horror what I have done I am trying to find some solutions. I just pray that --create didn't overwrite entire content of the hard driver. Could someone PLEASE help me out with this - the data that's on the drive is very important and unique ~10 years of photos, docs, etc. Is it possible that by specifying the participating hard drives in wrong order can make mdadm overwrite them? when I do mdadm --examine --scan I get something like ARRAY /dev/md/0 metadata=1.2 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b name=<hostname>:0 Interestingly enough name used to be 'raid' and not the host hame with :0 appended. Here is the 'sanitized' config entries: DEVICE /dev/sdf1 /dev/sde1 /dev/sdd1 CREATE owner=root group=disk mode=0660 auto=yes HOMEHOST <system> MAILADDR root ARRAY /dev/md0 metadata=1.2 name=tanserv:0 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b Here is the output from mdstat cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sdd1[0] sdf1[3] sde1[1] 1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU] unused devices: <none> fdisk shows the following: fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000bf62e Device Boot Start End Blocks Id System /dev/sda1 * 1 9443 75846656 83 Linux /dev/sda2 9443 9730 2301953 5 Extended /dev/sda5 9443 9730 2301952 82 Linux swap / Solaris Disk /dev/sdb: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000de8dd Device Boot Start End Blocks Id System /dev/sdb1 1 91201 732572001 8e Linux LVM Disk /dev/sdc: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00056a17 Device Boot Start End Blocks Id System /dev/sdc1 1 60801 488384001 8e Linux LVM Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000ca948 Device Boot Start End Blocks Id System /dev/sdd1 1 121601 976760001 fd Linux raid autodetect Disk /dev/dm-0: 1250.3 GB, 1250254913536 bytes 255 heads, 63 
sectors/track, 152001 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/dm-0 doesn't contain a valid partition table Disk /dev/sde: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x93a66687 Device Boot Start End Blocks Id System /dev/sde1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xe6edc059 Device Boot Start End Blocks Id System /dev/sdf1 1 121601 976760001 fd Linux raid autodetect Disk /dev/md0: 2000.4 GB, 2000401989632 bytes 2 heads, 4 sectors/track, 488379392 cylinders Units = cylinders of 8 * 512 = 4096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 524288 bytes / 1048576 bytes Disk identifier: 0x00000000 Disk /dev/md0 doesn't contain a valid partition table Per suggestions I did clean up the superblocks and re-created the array with --assume-clean option but with no luck at all. Is there any tool that will help me to revive at least some of the data? Can someone tell me what and how the mdadm --create does when syncs to destroy the data so I can write a tool to un-do whatever was done? After the re-creating of the raid I run fsck.ext4 /dev/md0 and here is the output root@tanserv:/etc/mdadm# fsck.ext4 /dev/md0 e2fsck 1.41.14 (22-Dec-2010) fsck.ext4: Superblock invalid, trying backup blocks... fsck.ext4: Bad magic number in super-block while trying to open /dev/md0 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 Per Shanes' suggestion I tried root@tanserv:/home/mushegh# mkfs.ext4 -n /dev/md0 mke2fs 1.41.14 (22-Dec-2010) Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) Stride=128 blocks, Stripe width=256 blocks 122101760 inodes, 488379392 blocks 24418969 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=0 14905 block groups 32768 blocks per group, 32768 fragments per group 8192 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848 and run fsck.ext4 with every backup block but all returned the following: root@tanserv:/home/mushegh# fsck.ext4 -b 214990848 /dev/md0 e2fsck 1.41.14 (22-Dec-2010) fsck.ext4: Invalid argument while trying to open /dev/md0 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> Any suggestions? Regards!

    Read the article

  • how to resize an encrypted logical volume?

    - by Nirmik
    I installed Ubuntu with encryption and LVM on my entire hard disk... Now I want to resize it. How do I do this? Following this link (How to resize an LVM partition?) gave me an error at step 2: ubuntu@ubuntu:~$ sudo e2fsck -f /dev/sda5 e2fsck 1.42.5 (29-Jul-2012) ext2fs_open2: Bad magic number in super-block e2fsck: Superblock invalid, trying backup blocks... e2fsck: Bad magic number in super-block while trying to open /dev/sda5 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 What do I do?

    Read the article

  • I'm Seeing Red

    - by Grant Fritchey
    Hello World! My move into the world of Red Gate is more and more complete with my shiny, new, red, blog. The goal of this blog is not to compete with, or replace, my blog over at ScaryDBA. Instead, this blog is where I can share things I find about Red Gate products and services. I can talk about the things that we're doing at Red Gate. I can talk about the things I'm doing at Red Gate. In short, this is my Red Gate blog. I'm still the Scary DBA, but over here, I'm painted bright red (and no, I was promised that no pictures were taken of that process). So look for tips and suggestions about Red Gate products, methods to help you do your job better using one of our tools, and anything else I can think of or comment on that supports you and our excellent software.

    Read the article

  • xsltproc killed, out of memory

    - by David Parks
    I'm trying to split up a 13GB xml file into small ~50MB xml files with this XSLT style sheet. But this process kills xsltproc after I see it taking up over 1.7GB of memory (that's the total on the system). Is there any way to deal with huge XML files with xsltproc? Can I change my style sheet? Or should I use a different processor? Or am I just S.O.L.? <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0" xmlns:exsl="http://exslt.org/common" extension-element-prefixes="exsl" xmlns:fn="http://www.w3.org/2005/xpath-functions"> <xsl:output method="xml" indent="yes"/> <xsl:strip-space elements="*"/> <xsl:param name="block-size" select="75000"/> <xsl:template match="/"> <xsl:copy> <xsl:apply-templates select="mysqldump/database/table_data/row[position() mod $block-size = 1]" /> </xsl:copy> </xsl:template> <xsl:template match="row"> <exsl:document href="chunk-{position()}.xml"> <add> <xsl:for-each select=". | following-sibling::row[position() &lt; $block-size]" > <doc> <xsl:for-each select="field"> <field> <xsl:attribute name="name"><xsl:value-of select="./@name"/></xsl:attribute> <xsl:value-of select="."/> </field> <xsl:text>&#xa;</xsl:text> </xsl:for-each> </doc> </xsl:for-each> </add> </exsl:document> </xsl:template>

    Read the article

  • Can I safely delete the Ubuntu 12.04 partition and use the unallocated space for my Elementary OS?

    - by d4ryl3
    I have this setup: I've decided to switch to Elementary OS Luna (fork of Ubuntu 12.04) as my main Linux distro. Now I need to delete my Ubuntu partition so I could add capacity to my eOS which only has 10Gb. Currently my eOS is in /dev/sda9, and Ubuntu in /dev/sda8/. I forgot where my bootloader is installed, so I ran bootinfoscript, which returned this: `============================= Boot Info Summary: =============================== = Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks in partition 94 for . sda1: __________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: /bootmgr /Boot/BCD sda2: __________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe sda3: __________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: /bootmgr /boot/bcd sda4: __________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sda5: __________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: According to the info in the boot sector, sda5 starts at sector 2048. Operating System: Boot files: sda6: __________________________________________ File system: swap Boot sector type: - Boot sector info: sda7: __________________________________________ File system: ext4 Boot sector type: Grub2 (v1.99) Boot sector info: Grub2 (v1.99) is installed in the boot sector of sda7 and looks at sector 851823520 of the same hard drive for core.img, but core.img can not be found at this location. Operating System: Boot files: /grub/grub.cfg /extlinux/extlinux.conf sda8: __________________________________________ File system: ext4 Boot sector type: Grub2 (v1.99) Boot sector info: Grub2 (v1.99) is installed in the boot sector of sda8 and looks at sector 860224256 of the same hard drive for core.img. core.img is at this location and looks for (,msdos9)/boot/grub on this drive. Operating System: Ubuntu 13.04 Boot files: /etc/fstab sda9: __________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: elementary OS Luna Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img` I need advice as to how to proceed. I mean, could I simply delete /dev/sda7/ and /dev/sda8/? Please help, thank you.

    Read the article

  • Drawing multiple Textures as tilemap

    - by DocJones
    I am trying to draw a 2d game map and the objects on the map in a single pass. Here is my OpenGL initialization code // Turn off unnecessary operations glDisable(GL_DEPTH_TEST); glDisable(GL_LIGHTING); glDisable(GL_CULL_FACE); glDisable(GL_STENCIL_TEST); glDisable(GL_DITHER); glEnable(GL_BLEND); glEnable(GL_TEXTURE_2D); // activate pointer to vertex & texture array glEnableClientState(GL_VERTEX_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); My drawing code is being called by a NSTimer every 1/60 s. Here is the drawing code of my world object: - (void) draw:(NSRect)rect withTimedDelta:(double)d { GLint *t; glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:@"blocks"]); glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); for (int x=0; x<[_map getWidth] ; x++) { for (int y=0; y<[_map getHeight] ; y++) { GLint v[] = { 16*x ,16*y, 16*x+16,16*y, 16*x+16,16*y+16, 16*x ,16*y+16 }; t=[_textureManager getBlockWithNumber:[_map getBlockAtX:x andY:y]]; glVertexPointer(2, GL_INT, 0, v); glTexCoordPointer(2, GL_INT, 0, t); glDrawArrays(GL_QUADS, 0, 4); } } } (_textureManager is a Singelton only loading a texture once!) The object drawing codes is identical (except the nested loops) in terms of OpenGL calls: - (void) drawWithTimedDelta:(double)d { GLint *t; GLint v[] = { 16*xpos ,16*ypos, 16*xpos+16,16*ypos, 16*xpos+16,16*ypos+16, 16*xpos ,16*ypos+16 }; glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:_textureName]); t=[_textureManager getBlockWithNumber:12]; glVertexPointer(2, GL_INT, 0, v); glTexCoordPointer(2, GL_INT, 0, t); glDrawArrays(GL_QUADS, 0, 4); } As soon as my central drawing routine calls the two drawing methods the second call overlays the first one. i would expect the call to world.draw to draw the map and "stamp" the objects upon it. Debugging shows me, that the first call is performed correctly (world is being drawn), but the following call to all objects ONLY draws the objects, the rest of the scene is getting black. I think i need to blend the drawn textures, but i cant seem to figure out how. Any help is appreciated. Thanks PS: Here is the github link to the project. It may not be in sync of my post here, but for some more in-depth analysis it may help.
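
    A hedged aside rather than the poster's code: the init code above enables GL_BLEND but, at least in the snippet shown, never selects a blend function. The usual state for layering alpha-textured sprites over an already-drawn tilemap (map first, then objects, with no clear in between) looks like the sketch below; the header path is an assumption for macOS, and other platforms use their own GL headers.

        // Sketch only: assumes a GL context already exists.
        #include <OpenGL/gl.h>   // macOS; elsewhere e.g. <GL/gl.h> or <GLES2/gl2.h>

        void setupSpriteBlending() {
            glEnable(GL_BLEND);
            // Incoming fragments are weighted by their alpha and the pixels already
            // drawn (the tilemap) keep the remainder, so transparent texels in the
            // object textures let the map show through instead of going black.
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        }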

    Read the article

  • Graffiti is a Sinatra-inspired Groovy Framework

    - by kerry
    Playing around with Sinatra the other day and realized I could really use something like this for Groovy. Thus, Graffiti was born. It’s basically a thin wrapper around Jetty. At first, I thought I might write my own server for it (everybody needs to do that once, don’t they?), but decided to invoke the ’simplest thing that could possibly work’ principle. Here is the requisite ‘Hello World’ example: import graffiti.* @Grab('com.goodercode:graffiti:1.0-SNAPSHOT') @Get('/helloworld') def hello() { 'Hello World' } Graffiti.serve this The code, plus more documentation is hosted under my github account.

    Read the article

  • Why do my 512x512 bitmaps look jaggy on Android OpenGL?

    - by Milo Mordaunt
    This is sort of driving me nuts, I've googled and googled and tried everything I can think of, but my sprites still look super blurry and super jaggy. Example: Here: https://docs.google.com/open?id=0Bx9Gbwnv9Hd2TmpiZkFycUNmRTA If you click through to the actual full size image you should see what I mean, it's like it's taking and average of every 5*5 pixels or something, the background looks really blurry and blocky, but the ball is the worst. The clouds look all right for some reason, probably because they're mostly transparent. I know the pngs aren't top notch themselves but hey, I'm no artist! I would imagine it's a problem with either: a. How the pngs are made example sprite (512x512): https://docs.google.com/open?id=0Bx9Gbwnv9Hd2a2RRQlJiQTFJUEE b. How my Matrices work This is the relevant parts of the renderer: public void onDrawFrame(GL10 unused) { if(world != null) { dt = System.currentTimeMillis() - endTime; world.update( (float) dt); // Redraw background color GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); Matrix.setIdentityM(mvMatrix, 0); Matrix.translateM(mvMatrix, 0, 0f, 0f, 0f); world.draw(mvMatrix, mProjMatrix); endTime = System.currentTimeMillis(); } else { Log.d(TAG, "There is no world...."); } } public void onSurfaceChanged(GL10 unused, int width, int height) { GLES20.glViewport(0, 0, width, height); Matrix.orthoM(mProjMatrix, 0, 0, width /2, 0, height /2, -1.f, 1.f); } And this is what each Quad does when draw is called: public void draw(float[] mvMatrix, float[] pMatrix) { Matrix.setIdentityM(mMatrix, 0); Matrix.setIdentityM(mvMatrix, 0); Matrix.translateM(mMatrix, 0, xPos, yPos, 0.f); Matrix.multiplyMM(mvMatrix, 0, mvMatrix, 0, mMatrix, 0); Matrix.scaleM(mvMatrix, 0, scale, scale, 0f); Matrix.rotateM(mvMatrix, 0, angle, 0f, 0f, -1f); GLES20.glUseProgram(mProgram); posAttr = GLES20.glGetAttribLocation(mProgram, "vPosition"); texAttr = GLES20.glGetAttribLocation(mProgram, "aTexCo"); uSampler = GLES20.glGetUniformLocation(mProgram, "uSampler"); int alphaHandle = GLES20.glGetUniformLocation(mProgram, "alpha"); GLES20.glVertexAttribPointer(posAttr, COORDS_PER_VERTEX, GLES20.GL_FLOAT, false, 0, vertexBuffer); GLES20.glVertexAttribPointer(texAttr, 2, GLES20.GL_FLOAT, false, 0, texCoBuffer); GLES20.glEnableVertexAttribArray(posAttr); GLES20.glEnableVertexAttribArray(texAttr); GLES20.glActiveTexture(GLES20.GL_TEXTURE0); GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture); GLES20.glUniform1i(uSampler, 0); GLES20.glUniform1f(alphaHandle, alpha); mMVMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVMatrix"); mPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uPMatrix"); GLES20.glUniformMatrix4fv(mMVMatrixHandle, 1, false, mvMatrix, 0); GLES20.glUniformMatrix4fv(mPMatrixHandle, 1, false, pMatrix, 0); GLES20.glDrawElements(GLES20.GL_TRIANGLE_STRIP, 4, GLES20.GL_UNSIGNED_SHORT, indicesBuffer); GLES20.glDisableVertexAttribArray(posAttr); GLES20.glDisableVertexAttribArray(texAttr); GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0); } c. 
How my texture loading/blending/shaders setup works Here is the renderer setup: public void onSurfaceCreated(GL10 unused, EGLConfig config) { // Set the background frame color GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f); GLES20.glDisable(GLES20.GL_DEPTH_TEST); GLES20.glDepthMask(false); GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA); GLES20.glEnable(GLES20.GL_BLEND); GLES20.glEnable(GLES20.GL_DITHER); } Here is the vertex shader: attribute vec4 vPosition; attribute vec2 aTexCo; varying vec2 vTexCo; uniform mat4 uMVMatrix; uniform mat4 uPMatrix; void main() { gl_Position = uPMatrix * uMVMatrix * vPosition; vTexCo = aTexCo; } And here's the fragment shader: precision mediump float; uniform sampler2D uSampler; uniform vec4 vColor; varying vec2 vTexCo; varying float alpha; void main() { vec4 color = texture2D(uSampler, vec2(vTexCo)); gl_FragColor = color; if(gl_FragColor.a == 0.0) { "discard; } } This is how textures are loaded: private int loadTexture(int rescource) { int[] texture = new int[1]; BitmapFactory.Options opts = new BitmapFactory.Options(); opts.inScaled = false; Bitmap temp = BitmapFactory.decodeResource(context.getResources(), rescource, opts); GLES20.glGenTextures(1, texture, 0); GLES20.glActiveTexture(GLES20.GL_TEXTURE0); GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture[0]); GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR); GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR); GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, temp, 0); GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D); GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0); temp.recycle(); return texture[0]; } I'm sure I'm doing about 20,000 things wrong, so I'm really sorry if the problem is blindingly obvious... The test device is a Galaxy Note, running a JellyBean custom ROM, if that matters at all. So the screen resolution is 1280x800, which means... The background is 1024x1024, so yeah it might be a little blurry, but shouldn't be made of lego. Thank you so much, any answer at all would be appreciated.
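
    One hedged observation on the loading code above, shown as a plain C-style sketch rather than the poster's Java: mipmaps are generated with glGenerateMipmap, but a GL_LINEAR minification filter never samples them, so a 512x512 texture scaled down on screen can still alias. A mipmap-aware filter is the standard setup; whether it cures this particular case is only an assumption.

        #include <GLES2/gl2.h>   // OpenGL ES 2.0 header, matching the GLES20 calls above

        void setTextureFiltering(GLuint texture) {
            glBindTexture(GL_TEXTURE_2D, texture);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            // A *_MIPMAP_* mode is what actually makes the generated mip chain get used.
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
            // glGenerateMipmap(GL_TEXTURE_2D) must still run after the image upload.
            glBindTexture(GL_TEXTURE_2D, 0);
        }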

    Read the article

  • Frame Interpolation issues for skeletal animation

    - by sebby_man
    I'm trying to animate in-between keyframes for skeletal animation but am having some issues. Each joint is represented by a quaternion and there is no translation component. When I try to slerp between the orientations at the two keyframes, I get a very wacky animation. I know my skinning equation is right because the animation is perfectly fine when the animation is directly on a keyframe rather than in between two. I'm using glm's built-in mix function to do the slerp, so I don't think there are any problems with the actual slerp implementation. There's really only one thing left that could be wrong here: I must not be in the correct space to do the slerp. Right now the orientations are in joint-local space. Do I have to be in world space? In some other space along the way? I have the bind pose matrix and world-space transformation matrix at my disposal if those are needed.
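
    For reference, a minimal glm sketch of interpolating joint-local keyframe orientations (the container layout is an assumption, not taken from the poster's engine). One common source of wild in-between poses is interpolating between q and -q, which encode the same rotation but sit on opposite hemispheres, so the sign choice is made explicit here even though glm::slerp already takes the shortest arc.

        #include <cstddef>
        #include <vector>
        #include <glm/glm.hpp>
        #include <glm/gtc/quaternion.hpp>

        std::vector<glm::quat> interpolatePose(const std::vector<glm::quat>& frameA,
                                               const std::vector<glm::quat>& frameB,
                                               float t)            // 0..1 between the two keyframes
        {
            std::vector<glm::quat> pose(frameA.size());
            for (std::size_t i = 0; i < frameA.size(); ++i) {
                glm::quat b = frameB[i];
                if (glm::dot(frameA[i], b) < 0.0f)
                    b = -b;                                 // same rotation, nearer representation
                pose[i] = glm::normalize(glm::slerp(frameA[i], b, t));
            }
            return pose;                                    // still joint-local; combine with parent
                                                            // transforms afterwards, as in the skinning pass
        }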

    Read the article

  • Why do operating systems do low level stuff in C and C++? Why not just C++?

    - by Cole Johnson
    On the Wikipedia page for Windows, it states that Windows is written in assembly for the bootloader and task switcher, and in C and C++ for kernel routines. This confuses me because, AFAIK, you can call C++ functions from an extern "C" block, as C++ is just C with extra features (all of which, AFAIK, could be rewritten in C if you wanted to). I can understand using C for the kernel functions so pure C apps can use them (like printf and such), but if they can just be wrapped in an extern "C" block, then why code in C? So my question is: why would a kernel be written in both C and C++ instead of just C++?
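
    To make the mechanism the question leans on concrete, here is a toy illustration (not kernel code): the function body is ordinary C++, but extern "C" gives it an unmangled, C-callable symbol, which is the wrapping the question refers to.

        #include <string>

        // C++ inside, C linkage outside: a plain C caller only needs the declaration
        //     int count_words(const char* text);
        extern "C" int count_words(const char* text) {
            std::string s(text ? text : "");    // C++ features are fair game in the body
            int words = 0;
            bool in_word = false;
            for (char c : s) {
                bool space = (c == ' ' || c == '\t' || c == '\n');
                if (!space && !in_word) { ++words; in_word = true; }
                if (space) in_word = false;
            }
            return words;
        }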

    Read the article
