Search Results

Search found 639 results on 26 pages for 'dog ears'.

Page 20/26 | < Previous Page | 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • My Take on Hadoop World 2011

    - by Jean-Pierre Dijcks
    I’m sure some of you have read pieces about Hadoop World, and I did see some headlines which were somewhat, shall we say, interesting. The keynote by Larry Feinsmith of JP Morgan Chase & Co. was one of the highlights of the conference for me, for a very simple reason: he addressed some real use cases outside of internet and ad platforms. The following are my notes; since the keynote was recorded, I presume you can go and look at Hadoopworld.com at some point. The use cases that were mentioned:
    - ETL – how can I do complex data transformation at scale?
    - Basel III liquidity analysis
    - Private banking – transaction filtering to feed [relational] data marts
    - Common Data Platform – a place to keep data that is (or will be) valuable some day, to someone, somewhere
    - 360-degree view of customers – become pro-active and look at events across lines of business. For example, make sure the mortgage folks know about direct deposits being stopped into an account, and ensure the bank is pro-active in servicing the customer.
    - Treasury and Security – Global Payment Hub [I think this is really consolidation of data to cross-reference activity across businesses and geographies]
    - Data mining
    - Bypass data engineering [I interpret this as running on the full large data set rather than on samples]
    - Fraud prevention – work on event triggers, say a number of failed log-ins to the website. When they occur, grab web logs, firewall logs and rules, and start to figure out who is trying to log in. Is this me, who forgot his password, or is it someone in some other country trying to guess passwords?
    - Trade quality analysis – do a batch analysis of all trades done and run them through an analysis or comparison pipeline
    One of the key requests – if you can say it like that – was for vendors and entrepreneurs to make sure that new tools work with existing tools. JPMC has a large footprint of BI tools, and Big Data reporting and tools should work with those tools, rather than be separate. Security and entitlement – how to protect data within a large cluster from unwanted snooping – was another topic that came up. I thought his elephant-ears graph was interesting (I couldn't actually read the points on it, but the concept certainly made some sense), and it was interesting – when asked to show hands – how the audience did not (!) think that RDBMS and Hadoop technology would overlap completely within a few years.
    Another interesting session was the one from Disney, discussing how Disney is building a DaaS (Data as a Service) platform and how Hadoop processing capabilities are mixed with database technologies. I thought this was one of the best sessions I have seen in a long time. It discussed a real use case: where problems existed, how they were solved, and how Disney planned some of it. The planning focused on three things/phases:
    - Determine the strategy – design a platform and evangelize it within the organization
    - Focus on the people – hire key people, grow and train the staff (and do not overload what you have with new things on top of their day-to-day job), and leverage a partner with experience
    - Work on execution of the strategy – implement the platform Hadoop next to the other technologies and work toward the DaaS platform
    This kind of fit with some of the LinkedIn comments, best summarized as "Think Platform – Think Hadoop". In other words [my interpretation], step back and engineer a platform (like DaaS in the Disney example), then layer the rest of the solutions on top of this platform.
    One general observation: I got the impression that we have knowledge gaps left and right. On the one hand, people are looking for more information and details on the Hadoop tools and languages. On the other, I got the impression that the capabilities of today's relational databases are underestimated, mostly in terms of data volumes and parallel processing capabilities, or things like commodity-hardware scale-out models. All in all I liked this conference; it was great to chat with a wide range of people on Oracle big data, on big data in general, on use cases and all sorts of other stuff. I just hope they get a set of bigger rooms next time… and yes, I hope I'm going to be back next year!

    Read the article

  • Speeding up procedural texture generation

    - by FalconNL
    Recently I've begun working on a game that takes place in a procedurally generated solar system. After a bit of a learning curve (having worked with neither Scala, OpenGL ES 2 nor libgdx before), I have a basic tech demo going where you spin around a single procedurally textured planet. The problem I'm running into is the performance of the texture generation.
    A quick overview of what I'm doing: a planet is a cube that has been deformed to a sphere. To each side, an n x n (e.g. 256 x 256) texture is applied, and the six sides are bundled in one 8n x n texture that is sent to the fragment shader. The last two spaces are not used; they're only there to make sure the width is a power of 2. The texture is currently generated on the CPU, using the updated 2012 version of the simplex noise algorithm linked to in the paper 'Simplex noise demystified'.
    The scene I'm using to test the algorithm contains two spheres: the planet and the background. Both use a greyscale texture consisting of six octaves of 3D simplex noise, so for example if we choose 128 x 128 as the texture size there are 128 x 128 x 6 x 2 x 6 = about 1.2 million calls to the noise function. The closest you will get to the planet is about what's shown in the screenshot, and since the game's target resolution is 1280x720 that means I'd prefer to use 512 x 512 textures. Combine that with the fact that the actual textures will of course be more complicated than basic noise (there will be a day and a night texture, blended in the fragment shader based on sunlight, and a specular mask; I need noise for continents, terrain color variation, clouds, city lights, etc.) and we're looking at something like 512 x 512 x 6 x 3 x 15 = 70 million noise calls for the planet alone. In the final game there will be activities when traveling between planets, so a wait of 5 or 10 seconds, possibly 20, would be acceptable, since I can calculate the texture in the background while traveling; though obviously the faster the better.
    Getting back to our test scene, performance on my PC isn't too terrible, though still too slow considering the final result is going to be about 60 times worse:
    128 x 128: 0.1s
    256 x 256: 0.4s
    512 x 512: 1.7s
    This is after I moved all performance-critical code to Java, since trying to do it in Scala was a lot worse. Running this on my phone (a Samsung Galaxy S3), however, produces a more problematic result:
    128 x 128: 2s
    256 x 256: 7s
    512 x 512: 29s
    Already far too long, and that's not even factoring in that it'll be minutes instead of seconds in the final version. Clearly something needs to be done. Personally, I see a few potential avenues, though I'm not particularly keen on any of them yet:
    - Don't precalculate the textures, but let the fragment shader calculate everything. Probably not feasible, because at one point I had the background as a fullscreen quad with a pixel shader and I got about 1 fps on my phone.
    - Use the GPU to render the texture once, store it, and use the stored texture from then on. Upside: might be faster than doing it on the CPU, since the GPU is supposed to be faster at floating-point calculations. Downside: effects that cannot (easily) be expressed as functions of simplex noise (e.g. gas planet vortices, moon craters, etc.) are a lot more difficult to code in GLSL than in Scala/Java.
    - Calculate a large amount of noise textures and ship them with the application. I'd like to avoid this if at all possible.
    - Lower the resolution. Buys me a 4x performance gain, which isn't really enough, plus I lose a lot of quality.
    - Find a faster noise algorithm. If anyone has one I'm all ears, but simplex is already supposed to be faster than Perlin.
    - Adopt a pixel-art style, allowing for lower-resolution textures and fewer noise octaves. While I originally envisioned the game in this style, I've come to prefer the realistic approach.
    - I'm doing something wrong and the performance should already be one or two orders of magnitude better. If this is the case, please let me know.
    If anyone has any suggestions, tips, workarounds, or other comments regarding this problem, I'd love to hear them.
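    For concreteness, here is a minimal sketch of the per-texel octave loop that produces the call counts above. It is an illustration, not the poster's code; noise3() is a placeholder stub standing in for a real 3D simplex noise implementation such as the 'Simplex noise demystified' one.
    // Sketch: fractal (fBm) accumulation of several octaves for one texel.
    public final class Fbm {
        // Stand-in so the sketch compiles; plug in real simplex noise here.
        static float noise3(float x, float y, float z) {
            return 0f;
        }

        static float fbm(float x, float y, float z, int octaves) {
            float sum = 0f, amp = 1f, freq = 1f, norm = 0f;
            for (int i = 0; i < octaves; i++) {
                sum += amp * noise3(x * freq, y * freq, z * freq);
                norm += amp;
                amp *= 0.5f;  // each octave contributes half the amplitude...
                freq *= 2f;   // ...at twice the frequency
            }
            return sum / norm; // normalized to roughly [-1, 1]
        }
    }
    // One n x n face costs n * n * octaves noise calls; multiply by six faces
    // and by the number of texture layers, and the totals above fall out.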

    Read the article

  • AdventureWorks2012 now available for all on SQL Azure

    - by jamiet
    Three days ago I tweeted this: "Idea. MSFT could host read-only copies of all the [AdventureWorks] DBs up on #sqlazure for the SQL community to use. RT if agree #sqlfamily" — Jamie Thomson (@jamiet) March 24, 2012. Evidently I wasn't the only one who thought this was a good idea, because as you can see from the screenshot, that tweet has, so far, been retweeted more than fifty times. Clearly there is a desire to see the AdventureWorks databases made available for the community to noodle around on, so I am pleased to announce that as of today you can do just that - [AdventureWorks2012] now resides on SQL Azure and is available for anyone, absolutely anyone, to connect to and use* for their own means.
    *By use I mean "issue some SELECT statements". You don't have permission to issue INSERTs, UPDATEs, DELETEs or EXECUTEs, I'm afraid - if you want to do that then you can get the bits and host it yourself.
    This database is free for you to use, but SQL Azure is of course not free, so before I give you the credentials please lend me your ears eyes for a short while longer. AdventureWorks on Azure is being provided for the SQL Server community to use, and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year, please donate via PayPal to [email protected]. Any amount, no matter how small, will help. If those 50+ people that retweeted me beforehand all contributed $2, then that would just about be enough to keep this up for a year. If the community contributes more than we need, then there are a number of additional things that could be done:
    - Host additional databases (Northwind anyone??)
    - Host in more datacentres (this first one is in Western Europe)
    - Make a charitable donation
    That last one, a charitable donation, is something I would really like to do. The SQL community have proved before that they can make a significant contribution to charitable organisations through purchasing the SQL Server MVP Deep Dives book, and I harbour hopes that AdventureWorks on Azure can continue in that vein. So please, if you think AdventureWorks on Azure is something that is worth supporting, please make a contribution.
    OK, with the prickly subject of begging for cash out of the way, let me share the details that you need to connect to [AdventureWorks2012] on SQL Azure:
    Server: mhknbn2kdz.database.windows.net
    Database: AdventureWorks2012
    User: sqlfamily
    Password: sqlf@m1ly
    That user sqlfamily has all the permissions required to enable you to query away to your heart's content. Here is the code that I used to set it up:
    CREATE USER sqlfamily FOR LOGIN sqlfamily;
    CREATE ROLE sqlfamilyrole;
    EXEC sp_addrolemember 'sqlfamilyrole','sqlfamily';
    GRANT VIEW DEFINITION ON Database::AdventureWorks2012 TO sqlfamilyrole;
    GRANT VIEW DATABASE STATE ON Database::AdventureWorks2012 TO sqlfamilyrole;
    GRANT SHOWPLAN TO sqlfamilyrole;
    EXEC sp_addrolemember 'db_datareader','sqlfamilyrole';
    You can connect to the database using SQL Server Management Studio (instructions to do that are provided at Walkthrough: Connecting to SQL Azure via the SSMS) or you can use the web interface at https://mhknbn2kdz.database.windows.net.
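    If you would rather try the credentials from a command line, a quick smoke test with the sqlcmd utility might look like the following (a sketch, not from the original post; note that SQL Azure has historically expected logins in the user@server form, so adjust if the plain username is rejected):
    sqlcmd -S mhknbn2kdz.database.windows.net -d AdventureWorks2012 -U sqlfamily@mhknbn2kdz -P sqlf@m1ly -Q "SELECT TOP (5) FirstName, LastName FROM Person.Person;"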
    Lastly, just for a bit of fun, I created a table up there called [dbo].[SqlFamily] into which you can leave a small calling card. Simply execute the following SQL statement (changing the values, of course):
    INSERT [dbo].[SqlFamily]([Name],[Message],[TwitterHandle],[BlogURI])
    VALUES ('Your name here','Some Message','your twitter handle (optional)','Blog URI (optional)');
    [Id] is an IDENTITY field and there is a default constraint on [DT], hence there is no need to supply a value for those. Note that you only have INSERT permissions, not UPDATE or DELETE, so make sure you get it right first time! Any offensive or distasteful remarks will of course be deleted :)
    Thank you for reading this far, and have fun using AdventureWorks on Azure. I hope it proves to be useful for some of you.
    @jamiet
    AdventureWorks on Azure - provided by the SQL Server community, for the SQL Server community!

    Read the article

  • Olympics data available for all on Windows Azure SQL Database and Power View

    - by jamiet
    Are you looking around for some decent test data for your BI demos? Well, if so, Microsoft have provided some data about all medals won at the Olympic Games (1900 to 2008) at OlympicsData workbook - Excel, SSIS, Azure sample; it provides analysis over athletes, countries, medal type, sport, discipline and various other dimensions. The data has been provided in an Excel workbook along with instructions on how to load the data into a Windows Azure SQL Database using SQL Server Integration Services (SSIS). Frankly though, the rigmarole of standing up your own Windows Azure SQL Database (ok, SQL Azure database) is both costly (SQL Azure isn't free) and time-consuming (the provided instructions aren't exactly an idiot's guide, and getting SSIS to work properly with Excel isn't a barrel of laughs either). To ease the pain for all you BI folks out there who simply want to party on the data, I have loaded it all into the SQL Azure database that I use for hosting AdventureWorks on Azure. You can read more about AdventureWorks on Azure below; I'll summarise here by saying it is a SQL Azure database provided for the use of the SQL Server community, supported by voluntary donations. To view the data, the credentials you need are:
    Server: mhknbn2kdz.database.windows.net
    Database: AdventureWorks2012
    User: sqlfamily
    Password: sqlf@m1ly
    Type those into SSMS and away you go; the data is provided in four tables, [olympics].[Sport], [olympics].[Discipline], [olympics].[Event] & [olympics].[Medalist].
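    If you want a starting point for exploring, a first query might look something like this (a sketch on my part: the join keys and column names here are guesses, so check the actual table definitions before relying on them):
    -- Hypothetical column/key names; inspect the [olympics] tables first.
    SELECT TOP (10) m.Athlete, s.Sport, COUNT(*) AS Medals
    FROM [olympics].[Medalist] AS m
    JOIN [olympics].[Event] AS e ON e.EventId = m.EventId
    JOIN [olympics].[Discipline] AS d ON d.DisciplineId = e.DisciplineId
    JOIN [olympics].[Sport] AS s ON s.SportId = d.SportId
    GROUP BY m.Athlete, s.Sport
    ORDER BY Medals DESC;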
    I figured this would be a good candidate for a Power View report, so I fired up Excel 2013 and built such a report to slice'n'dice through the data. Here are some screenshots that should give you a flavour of what is available:
    - A view of all the available data
    - Where do all the gymnastics medals go?
    - Which countries do top-ten all-time medal winners come from?
    You get the idea. There are masses of information here, and if you have Excel 2013 handy, Power View provides a quick and easy way of surfing through it. To save you the bother of setting up the Power View report yourself, you can have the one that I took these screenshots from; it is available on my SkyDrive at OlympicsAnalysis.xlsx, so just hit the link and download to play to your heart's content. Party on, people!
    As I said above, the data is hosted on a SQL Azure database that I use for hosting "AdventureWorks on Azure", which I first announced in March 2012 at "AdventureWorks2012 now available for all on SQL Azure". I'll repeat the pertinent parts of that blog post here:
    "I am pleased to announce that as of today … [AdventureWorks2012] now resides on SQL Azure and is available for anyone, absolutely anyone, to connect to and use for their own means. This database is free for you to use, but SQL Azure is of course not free, so before I give you the credentials please lend me your ears eyes for a short while longer. AdventureWorks on Azure is being provided for the SQL Server community to use, and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year, please donate via PayPal to [email protected]. Any amount, no matter how small, will help. If those 50+ people that retweeted me beforehand all contributed $2, then that would just about be enough to keep this up for a year. If the community contributes more than we need, then there are a number of additional things that could be done:
    - Host additional databases (Northwind anyone??)
    - Host in more datacentres (this first one is in Western Europe)
    - Make a charitable donation
    That last one, a charitable donation, is something I would really like to do. The SQL community have proved before that they can make a significant contribution to charitable organisations through purchasing the SQL Server MVP Deep Dives book, and I harbour hopes that AdventureWorks on Azure can continue in that vein. So please, if you think AdventureWorks on Azure is something that is worth supporting, please make a contribution."
    I'd like to emphasize that last point. If my hosting this Olympics data is useful to you, please support this initiative by donating. Thanks in advance.
    @Jamiet

    Read the article

  • FileNotFoundException, although the XML file should be deployed

    - by Bernhard V
    Hi, I've got problems starting my WAR application on a local JBoss. After two other EARs are deployed and the TomcatDeployer begins deploying the WAR, I'm getting the following error message:
    2010-04-28 10:01:56,605 ERROR [org.jboss.ejb.plugins.LogInterceptor] [] [main] EJBException in method: public abstract at.sozvers.stp.zpv.ejb.lea.rwsuc.EJBLeaRegelwerkSuchenRemote at.sozvers.stp.zpv.ejb.lea.rwsuc.EJBLeaRegelwerkSuchenHome.create() throws javax.ejb.CreateException, java.rmi.RemoteException, causedBy:
    javax.ejb.EJBException: org.springframework.beans.factory.access.BootstrapException: Unable to initialize group definition. Group resource name [classpath*:applicationContext.xml], factory key [contextService];
    nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'contextService' defined in URL [jar:file:/C:/ta30/nutzb/jboss-4.2.3.GA.ZPV/server/default/deploy/deploy.last/zpv-app-web-frontend-1.0-SNAPSHOT.war/WEB-INF/lib/zpv-comp-ejb-modules-1.0-SNAPSHOT-client.jar!/applicationContext.xml]: Instantiation of bean failed;
    nested exception is org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [org.springframework.context.support.ClassPathXmlApplicationContext]: Constructor threw exception;
    nested exception is org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document from class path resource [at/sozvers/stp/zpv/dao/ContextBasic.xml];
    nested exception is java.io.FileNotFoundException: class path resource [at/sozvers/stp/zpv/dao/ContextBasic.xml] cannot be opened because it does not exist
    The sad thing is that the resource at/sozvers/stp/zpv/dao/ContextBasic.xml actually is placed in a JAR in one of my EAR files, which should be deployed before the WAR. And at least I get a message that the deployment of the EAR has been successful. I also looked into the JAR with my file archiver, and ContextBasic.xml is indeed there, at the right place. Is there a way for me to make sure that the JAR - not just the EAR as a whole - is really deployed to JBoss? I'm already starting to lose my head over this issue. Thank you. Bernhard
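    For illustration, one quick diagnostic (my sketch, not from the original question) is to ask the web application's own classloader where, if anywhere, it sees the file; run something like this from a servlet or startup hook inside the deployed WAR:
    // Log where the context classloader finds the resource, if at all.
    java.net.URL url = Thread.currentThread().getContextClassLoader()
            .getResource("at/sozvers/stp/zpv/dao/ContextBasic.xml");
    System.out.println(url == null ? "ContextBasic.xml is NOT on the classpath"
                                   : "Found at: " + url.toExternalForm());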

    Read the article

  • Deploy multiple instances of an EAR (representing versions) to Glassfish

    - by Thorbjørn Ravn Andersen
    I basically want to be able to deploy multiple versions of the same EAR file to the same server (GlassFish instance?), and have a unique path to each version separating them. From my reading on this, it appears that multiple EARs deploy to the root of the web server namespace, so they can coexist as long as they do not have colliding context-roots of WARs. In my case, rather than having everything go under "/", I'd like to be able to brand a given EAR-file build to ALWAYS deploy under a given path like "/foo-20100319" or "/foo-CUSTOMER-20010101". This can easily be done with a single WAR file just by renaming it. I do not need or want the versions to disturb each other. It is my understanding that this remapping is outside the scope of the application.xml file, but I found that http://docs.sun.com/app/docs/doc/820-7693/beayr?a=view says that I can specify web-uri and context-root, and I am not certain whether what I wish to do can be specified with these in GlassFish. How should I approach this? I have full control over the build process. (I have found http://stackoverflow.com/questions/877390/deploying-multiple-java-web-apps-to-glassfish-in-one-go but I am not certain how to apply this to what I need.)
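    For what it's worth, a versioned context-root per build is exactly the kind of thing the application.xml module entry from those docs can express. A sketch (the module and path names are invented) of what a build script could stamp with the version string:
    <!-- META-INF/application.xml - rewrite the context-root per build -->
    <application>
      <display-name>foo-20100319</display-name>
      <module>
        <web>
          <web-uri>foo-web.war</web-uri>
          <context-root>/foo-20100319</context-root>
        </web>
      </module>
    </application>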

    Read the article

  • Qmake does not specify a valid qt

    - by Comptrol
    After installing the Qt SDK for Open Source C++ development on Mac OS by following the respective steps: "Note for the binary package: If you have the binary package, simply double-click on the Qt.mpkg and follow the instructions to install Qt." Yes, that is all I have done to install Qt on Mac OS X. Everything was going fine until I ran a sample application, whose compile output was:
    No valid Qt version set. Set one in Preferences
    Error while building project qtilk
    When executing build step 'QMake'
    Canceled build.
    Then I tried to change the respective Qt version in Preferences, and when I hovered over the Path I realized my mkspec isn't set. Then I tried querying qmake with qmake -query:
    QT_INSTALL_PREFIX:/
    QT_INSTALL_DATA:/usr/local/Qt4.6
    QT_INSTALL_DOCS:/Developer/Documentation/Qt
    QT_INSTALL_HEADERS:/usr/include
    QT_INSTALL_LIBS:/Library/Frameworks
    QT_INSTALL_BINS:/Developer/Tools/Qt
    QT_INSTALL_PLUGINS:/Developer/Applications/Qt/plugins
    QT_INSTALL_TRANSLATIONS:/Developer/Applications/Qt/translations
    QT_INSTALL_CONFIGURATION:/Library/Preferences/Qt
    QT_INSTALL_EXAMPLES:/Developer/Examples/Qt/
    QT_INSTALL_DEMOS:/Developer/Examples/Qt/Demos
    QMAKE_MKSPECS:/usr/local/Qt4.6/mkspecs
    QMAKE_VERSION:2.01a
    QT_VERSION:4.6.2
    QMAKE_MKSPECS seems to be set here?? Will setting my mkspec solve my building problem? I tried setting it by typing export mkspec=macx-g++. Still, mkspec seems not to be set to anything. I am all ears waiting for your answers. Thanks in advance.
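    Incidentally, the environment variable qmake actually consults is QMAKESPEC rather than mkspec, so a sketch of that export would be:
    export QMAKESPEC=macx-g++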

    Read the article

  • Eclipse (Aptana) Typing Lag

    - by Zack
    Hello SO, I've been using Aptana for some time now, and as of late I've been dealing with files that are really, really big (500+ lines of code, which is huge for me, being a novice developer). Whenever I deal with smaller files, I get that weird sensation that I'm "in front of" what's typing, but now I'm quite sure of it - there is a significant lag between when I type something and when I see the text appear on screen. I don't have this issue with Dreamweaver CS3, so I know my computer has the capability to edit these files without this happening, yet Eclipse still lags. I also can't see what's being deleted if I hold down backspace: I see the first few characters get deleted, but then everything "hangs". Once I release the backspace key, the characters that would have been shown deleting instantly vanish all at once. The same thing happens with the forward delete key. I'm beginning to think this is an issue with Java, since I have the same feeling that everything is slightly "behind me" when I'm using -any- Java application. The computer is an Intel Pentium 4 3.2 GHz Prescott, with 2GB of DDR400 RAM and a Radeon HD3650 graphics card. If anyone knows how to fix this lagging issue, I'm all ears (eyes?); if anyone can recommend a different IDE with capabilities similar to Aptana (I do Python, HTML, CSS and JS; I use Git for SCM), I'd be glad to give it a try. Thanks!

    Read the article

  • Google App Engine - Uploading blobs and authentication

    - by Keyur
    (I tried asking this on the GAE forums but didn't get an answer, so am trying it here.) Currently, to upload blobs, the App Engine blobstore service creates a unique one-time URL that a user can post blobs to. My requirement is that I only want authenticated / authorized users to post blobs in my application. I can achieve this currently if the page that includes the multipart form to upload blobs is in my application. However, I am looking at providing a "REST API" for my users to upload their blobs. While it is true that the one-time nature of the upload URL mitigates the chances of rogue use, it's still possible. I was wondering if there is anyone on the App Engine team here who can consider a feature where developers can register an upload listener. (Or if there is already a way, I'll be all ears.) A standard servlet filter could also potentially do the job. This would give us an opportunity to authenticate / validate / decorate requests before the request gets forwarded to the blobstore service. Thanks, Keyur
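    As an illustration of the gatekeeping described above (my sketch, not the poster's code; the "/upload-complete" callback path is invented), the App Engine Java API can already refuse to hand out an upload URL to unauthenticated callers:
    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import com.google.appengine.api.blobstore.BlobstoreService;
    import com.google.appengine.api.blobstore.BlobstoreServiceFactory;
    import com.google.appengine.api.users.UserService;
    import com.google.appengine.api.users.UserServiceFactory;

    // Only authenticated users ever receive a one-time blobstore upload URL.
    public class UploadUrlServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            UserService users = UserServiceFactory.getUserService();
            if (users.getCurrentUser() == null) {
                resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
                return;
            }
            BlobstoreService blobstore = BlobstoreServiceFactory.getBlobstoreService();
            resp.setContentType("text/plain");
            resp.getWriter().println(blobstore.createUploadUrl("/upload-complete"));
        }
    }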

    Read the article

  • How can I apply a PSSM efficiently?

    - by flies
    I am fitting position-specific scoring matrices (PSSMs, aka position-specific weight matrices). The fit I'm using is like simulated annealing: I perturb the PSSM, compare the prediction to experiment, and accept the change if it improves agreement. This means I apply the PSSM millions of times per fit; performance is critical. In my particular problem, I'm applying a PSSM for an object of length L (~8 bp) at every position of a DNA sequence of length M (~30 bp) (so there are M-L+1 valid positions). I need an efficient algorithm to apply a PSSM. Can anyone help improve performance? My best idea is to convert the DNA into some kind of matrix so that applying the PSSM is matrix multiplication. There are efficient linear algebra libraries out there (e.g. BLAS), but I'm not sure how best to turn an M-length DNA sequence into an M x 4 matrix and then apply the PSSM at each position. The solution needs to work for higher-order/dinucleotide terms in the PSSM - presumably this means representing the sequence matrix separately for mononucleotides and for dinucleotides. My current solution iterates over each position m, then over each letter in the word from m to m+L-1, adding the corresponding term in the matrix. I'm storing the matrix as a multi-dimensional STL vector, and profiling has revealed that a lot of the computation time is spent just accessing the elements of the PSSM (with similar performance bottlenecks accessing the DNA sequence). If someone has an idea besides matrix multiplication, I'm all ears.
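    One common remedy for exactly this access pattern (a sketch of my own, not the poster's code) is to encode the sequence once as small integers and flatten the PSSM into a single contiguous array, so the inner loop is pure index arithmetic rather than nested vector lookups:
    #include <string>
    #include <vector>

    // Score a length-L PSSM at every valid offset of a DNA sequence.
    // pssm is stored row-major as L*4 doubles (A, C, G, T per row).
    std::vector<double> scoreAll(const std::string& dna,
                                 const std::vector<double>& pssm, int L) {
        // Encode the sequence once: A->0, C->1, G->2, T->3.
        std::vector<int> code(dna.size());
        for (std::size_t i = 0; i < dna.size(); ++i) {
            switch (dna[i]) {
                case 'A': code[i] = 0; break;
                case 'C': code[i] = 1; break;
                case 'G': code[i] = 2; break;
                default:  code[i] = 3; break;
            }
        }
        const int M = static_cast<int>(dna.size());
        std::vector<double> scores(M - L + 1);
        for (int m = 0; m + L <= M; ++m) {       // each placement
            double s = 0.0;
            for (int j = 0; j < L; ++j)          // each motif position
                s += pssm[j * 4 + code[m + j]];  // contiguous access, no nesting
            scores[m] = s;
        }
        return scores;
    }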

    Read the article

  • Help making userscript work in chrome

    - by Vishal Shah
    I've written a userscript for Gmail, Pimp.my.Gmail, and I'd like it to be compatible with Google Chrome too. Now, I have tried a couple of things, to the best of my JavaScript knowledge (which is very weak), and have been successful up to a certain extent, though I'm not sure if it's the right way. Here's what I tried to make it work in Chrome: the very first thing I found is that contentWindow.document doesn't work in Chrome, so I tried contentDocument, which works. BUT I noticed one thing while checking the console messages in Firefox and Chrome: the script gets executed multiple times in Firefox, whereas in Chrome it just executes once! So I had to abandon the window.addEventListener('load', init, false); line and replace it with window.setTimeout(init, 5000); and I'm not sure if this is a good idea. The other thing I tried is keeping the window.addEventListener('load', init, false); line and using window.setTimeout(init, 1000); inside init() in case the canvas frame is not found. So please do let me know what would be the best way to make this script cross-browser compatible. Oh, and I'm all ears for making this script better/more efficient code-wise (and I'm sure there is a way).
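    One pattern that often replaces fixed delays in userscripts (a sketch of mine, not from the script itself; findCanvasFrame() is a hypothetical stand-in for however the script locates Gmail's canvas frame) is to poll until the element actually exists:
    // Retry init() every 500 ms until the canvas frame appears, then stop.
    function waitForCanvas(attemptsLeft) {
        if (findCanvasFrame()) {
            init();
        } else if (attemptsLeft > 0) {
            window.setTimeout(function () { waitForCanvas(attemptsLeft - 1); }, 500);
        }
    }
    waitForCanvas(20); // give up after roughly 10 seconds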

    Read the article

  • Vertical Image Border Changes Alignment when Screen is not maximized

    - by Randy
    I have some pretty straightforward HTML code with a few tables to organize various items of text or images. All works fine except that I need to place a vertical border on both the left and right sides of the screen. I am able to do this with a 2x2-pixel image that I stretch out. When the user has their screen maximized, everything looks great. But when the user hits "Restore Down", the borders stay in place, but the tables get shoved down so that they start below where the borders end, which is off screen. In other words, the relative alignment between the borders and the tables gets all screwed up. Does anybody know how to make this alignment stay consistent on a restore down? I'm pretty much a newbie with HTML and ASP, so speak slowly. If there is a better method to accomplish this, I'm all ears. Thanks. Here is the relevant section of code:
    <%@ Master Language="C#" AutoEventWireup="true" CodeFile="MasterPage.master.cs" Inherits="MasterPage" %>
    <form id="form1" runat="server">
    <asp:Image ID="LeftBorder" src="../Images/Border_Blue.jpg" runat="server" WIDTH="15" HEIGHT="1000" BORDER="0" alt="Image Missing" align="left"/>
    <asp:Image ID="RightBorder" src="../Images/Border_Blue.jpg" runat="server" WIDTH="15" HEIGHT="1000" BORDER="0" alt="Image Missing" align="right" />
    <table id="BannerTable" style="height: 100px">
      <tr>
        <td width="934px">
          <img src="../Images/Header.jpg" alt="Image Missing" id="ImgBanner" align="left"/></td>
      </tr>
    </table>
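    For comparison, one way to guarantee the borders can never drift from the content (a suggestion of mine, not from the post; the wrapper id and colour are invented) is to draw them with CSS on a centered wrapper instead of stretching images:
    /* Borders live on the same box as the content, so they always line up. */
    #wrapper {
        width: 934px;               /* the content width used by BannerTable */
        margin: 0 auto;             /* center the page in the viewport */
        border-left: 15px solid #003399;
        border-right: 15px solid #003399;
    }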

    Read the article

  • Managing logs/warnings in Python extensions

    - by Dimitri Tcaciuc
    TL;DR version: what do you use for configurable (and preferably captured) logging inside the C++ bits of a Python project? Details follow. Say you have a few compiled .so modules that may need to do some error checking and warn the user of (partially) incorrect data. Currently I have a pretty simplistic setup where I'm using the logging framework from Python code and the log4cxx library from C/C++. The log4cxx log level is defined in a file (log4cxx.properties) and is currently fixed, and I'm thinking about how to make it more flexible. A couple of choices that I see. One way to control it would be to have a module-wide configuration call:
    # foo/__init__.py
    import sys
    from _foo import bar, baz, configure_log
    configure_log(sys.stdout, WARNING)

    # tests/test_foo.py
    def test_foo():
        # Maybe a custom context to change the logfile for
        # the module and restore it at the end.
        with CaptureLog(foo) as log:
            assert foo.bar() == 5
            assert log.read() == "124.24 - foo - INFO - Bar returning 5"
    Or have every compiled function that does logging accept optional log parameters:
    # foo.c
    int bar(PyObject* x, PyObject* logfile, PyObject* loglevel)
    {
        LoggerPtr logger = default_logger("foo");
        if (logfile != Py_None)
            logger = file_logger(logfile, loglevel);
        ...
    }

    # tests/test_foo.py
    def test_foo():
        with TemporaryFile() as logfile:
            assert foo.bar(logfile=logfile, loglevel=DEBUG) == 5
            assert logfile.read() == "124.24 - foo - INFO - Bar returning 5"
    Or some other way? The second one seems somewhat cleaner, but it requires altering function signatures (or using kwargs and parsing them). The first one is... probably somewhat awkward, but it sets up the entire module in one go and removes the logic from each individual function. What are your thoughts on this? I'm all ears for alternative solutions as well. Thanks,

    Read the article

  • Losing URI segments when paginating with CodeIgniter

    - by Danny Herran
    I have a /payments interface where the user should be able to filter by price range, bank, and other criteria. Those filters are standard select boxes. When I submit the filter form, all the post data goes to another method called payments/search. That method performs the validation, saves the post values into session flashdata, and redirects the user back to /payments, passing the flashdata name via the URL. So my standard pagination links with no filters are exactly like this:
    payments/index/20/
    payments/index/40/
    payments/index/60/
    And if you submit the filter form, the returning URL is:
    payments/index/0/b48c7cbd5489129a337b0a24f830fd93
    This works just great. If I change the zero to something else, it paginates just fine. The only issue, however, is that the << 1 2 3 4 page links won't keep the hash after the pagination offset. CodeIgniter is generating the page links while ignoring that additional URI segment. My uri_segment config is already set to 3:
    $config['uri_segment'] = 3;
    I cannot set the page offset to segment 4, because that hash may or may not exist. Any ideas on how I can solve this? Is it mandatory for CI to have the offset as the last segment in the URI? Maybe I am trying an incorrect approach, so I am all ears. Thank you folks.

    Read the article

  • window.setInterval from inside an object

    - by Keith Rousseau
    I'm currently having an issue where I have a javascript object that is trying to use setInterval to call a private function inside of itself. However, it can't find the object when I try to call it. I have a feeling that it's because window.setInterval is trying to call into the object from outside but doesn't have a reference to the object. FWIW - I can't get it to work with the function being public either. The basic requirement is that I may need to have multiple instances of this object to track multiple uploads that are occurring at once. If you have a better design than the current one or can get the current one working then I'm all ears. The following code is meant to continuously ping a web service to get the status of my file upload:
    var FileUploader = function(uploadKey) {
        var intervalId;
        var UpdateProgress = function() {
            $.get('someWebService', {}, function(json) {
                alert('success');
            });
        };

        return {
            BeginTrackProgress: function() {
                intervalId = window.setInterval('UpdateProgress()', 1500);
            },
            EndTrackProgress: function() {
                clearInterval(intervalId);
            }
        };
    };
    This is how it is being called:
    var fileUploader = new FileUploader('myFileKey');
    fileUploader.BeginTrackProgress();
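    For what it's worth, the string argument 'UpdateProgress()' is evaluated in the global scope, where the closure-private function is invisible - which matches the suspicion above. A sketch of the usual remedy is to hand setInterval the function reference itself:
    // Pass the closure's own reference; no global-scope lookup is attempted.
    BeginTrackProgress: function() {
        intervalId = window.setInterval(UpdateProgress, 1500);
    }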

    Read the article

  • Assistance with Lua functions

    - by Josh
    As noted before, I'm relatively new to Lua, but again, I learn quickly. The last time I got help here it helped me immensely, and I was able to write a better script. Now I've come to another question that I think will make my life a bit easier. I have no clue what I'm doing with functions, but I'm hoping there is a way to do what I want to do here. Below you'll see an example of the code I have to use to strip down some unneeded elements. Yeah, I realize it's not efficient in the least, so if anyone has a better idea of how to make it much more efficient, I'm all ears. What I would like to do is create a function with it so that I can strip down whatever variable with a simple call (like stripdown(winds)). I appreciate any help that is offered, and any lessons given. Thanks!
    winds = string.gsub(winds,"%b<>","")
    winds = string.gsub(winds,"%c"," ")
    winds = string.gsub(winds," "," ")
    winds = string.gsub(winds," "," ")
    winds = string.gsub(winds,"^%s*(.-)%s*$", "%1)")
    winds = string.gsub(winds,"&nbsp;","")
    winds = string.gsub(winds,"/ ", "(")
    Josh
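    A minimal sketch of the kind of wrapper being asked for (my illustration; it simply factors the substitution list above into a table, so the pairs can be edited in one place):
    -- Apply each pattern/replacement pair in order and return the result.
    local subs = {
        { "%b<>", "" },
        { "%c", " " },
        { " ", " " },   -- the two space passes from the original, kept verbatim
        { " ", " " },
        { "^%s*(.-)%s*$", "%1)" },
        { "&nbsp;", "" },
        { "/ ", "(" },
    }

    local function stripdown(s)
        for _, sub in ipairs(subs) do
            s = string.gsub(s, sub[1], sub[2])
        end
        return s
    end

    winds = stripdown(winds)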

    Read the article

  • How to get a template tag to auto-check a checkbox in Django

    - by Daniel Quinn
    I'm using a ModelForm class to generate a bunch of checkboxes for a ManyToManyField but I've run into one problem: while the default behaviour automatically checks the appropriate boxes (when I'm editing an object), I can't figure out how to get that information in my own custom templatetag. Here's what I've got in my model:
    ...
    from django.forms import CheckboxSelectMultiple, ModelMultipleChoiceField
    interests = ModelMultipleChoiceField(widget=CheckboxSelectMultiple(), queryset=Interest.objects.all(), required=False)
    ...
    And here's my templatetag:
    @register.filter
    def alignboxes(boxes, cls):
        """
        Details on how this works can be found here:
        http://docs.djangoproject.com/en/1.1/howto/custom-template-tags/
        """
        r = ""
        i = 0
        for box in boxes.field.choices.queryset:
            r += "<label for=\"id_%s_%d\" class=\"%s\"><input type=\"checkbox\" name=\"%s\" value=\"%s\" id=\"id_%s_%d\" /> %s</label>\n" % (
                boxes.name, i, cls, boxes.name, box.id, boxes.name, i, box.name
            )
            i = i + 1
        return mark_safe(r)
    The thing is, I'm only doing this so I can wrap some simpler markup around these boxes, so if someone knows how to make that happen in an easier way, I'm all ears. I'd be happy with knowing a way to access whether or not a box should be checked though.

    Read the article

  • How do you verify that 2 copies of a VB 6 executable came from the same code base?

    - by Tim Visher
    I have a program under version control that has gone through multiple releases. A situation came up today where someone had somehow managed to point to an old copy of the program and thus was encountering bugs that have since been fixed. I'd like to go back and just delete all the old copies of the program (keeping them around is a company policy that dates from before version control was common and should no longer be necessary), but I need a way of verifying that I can generate the exact same executable that is better than saying "The old one came out of this commit, so this one should be the same." My initial thought was to simply MD5-hash the executable, store the hash file in source control, and be done with it, but I've come up against a problem which I can't even parse. It seems that every time the executable is generated (method: Open Project, File > Make X.exe) it hashes differently. I've noticed that Visual Basic messes with files every time the project is opened, in seemingly random ways, but I didn't think that would make it into the executable, nor do I have any evidence that that is indeed what's happening. To try to guard against that, I tried generating the executable multiple times within the same IDE session and checking the hashes, but they continued to be different every time. So that's:
    1. Generate executable
    2. Generate MD5 checksum: md5sum X.exe > X.md5
    3. Verify MD5 for current executable: md5sum -c X.md5
    4. Generate new executable
    5. Verify MD5 for new executable: md5sum -c X.md5
    6. Fail verification because the computed checksum doesn't match.
    I'm not understanding something about either MD5 or the way VB 6 generates the executable, but I'm also not married to the idea of using MD5. If there is a better way to verify that two executables are indeed the same, then I'm all ears. Thanks in advance for your help!

    Read the article

  • Problems installing PIL after OSX 10.9

    - by user2632417
    I installed Mac OS X 10.9 the day it came out. Afterwards I decided I needed to install PIL. I'd installed it before, but it appeared the update had broken that. When I try to use pip to install PIL, it fails when building _imaging. It appears the root cause is this:
    /usr/include/sys/cdefs.h:655:2: error: Unsupported architecture
    There's also a similar error here:
    /usr/include/machine/limits.h:8:2: error: architecture not supported
    and here:
    /usr/include/machine/_types.h:34:2: error: architecture not supported
    Then there's a whole list of missing types:
    /usr/include/sys/_types.h:94:9: error: unknown type name '__int64_t'
    typedef __int64_t __darwin_blkcnt_t; /* total blocks */
    /usr/include/sys/_types.h:95:9: error: unknown type name '__int32_t'
    typedef __int32_t __darwin_blksize_t; /* preferred block size */
    /usr/include/sys/_types.h:96:9: error: unknown type name '__int32_t'
    typedef __int32_t __darwin_dev_t; /* dev_t */
    /usr/include/sys/_types.h:99:9: error: unknown type name '__uint32_t'
    typedef __uint32_t __darwin_gid_t; /* [???] process and group IDs */
    /usr/include/sys/_types.h:100:9: error: unknown type name '__uint32_t'
    typedef __uint32_t __darwin_id_t; /* [XSI] pid_t, uid_t, or gid_t */
    /usr/include/sys/_types.h:101:9: error: unknown type name '__uint64_t'
    typedef __uint64_t __darwin_ino64_t; /* [???] Used for 64 bit inodes */
    Needless to say, I don't know where to go from here. I've got a couple of guesses, but I don't even know how to check them:
    - a wrong include, probably as a result of a badly configured environment variable
    - a problem with Xcode's installation / missing command line tools
    - messed-up header files
    If anyone has any suggestions, either to check one of those possibilities or for one of their own, I'm all ears.

    Read the article

  • Kohana 3.2 - Database Session losing data on new Page Request

    - by reado
    I've set up my dev Kohana server to use an encrypted database as the default session type. I'm also using this in combination with Auth to implement user authentication. Right now my users are able to authenticate correctly and the authentication keys are being stored in the session. I'm also storing additional data like the user's first name and business name during the login procedure. When my login function is ready to redirect the user to the user dashboard, I'm able to see all the data correctly when I do Session::instance()->as_array():
    Array ( [auth_user] => NRyk6lA8 [businessname] => Dudetown [firstname] => Matt )
    As soon as I redirect the user to another page, Session::instance()->as_array() is empty. By dumping out the Session::instance() object, I can see that the session ids are still the same. When I look at my database table, though, I don't see any session records being saved, and my session table is empty. My bootstrap.php contains:
    Session::$default = 'database';
    Cookie::$salt = 'asdfasdf';
    Cookie::$expiration = 1209600;
    Cookie::$domain = FALSE;
    and my session.php config file looks like:
    return array(
        'database' => array(
            'name' => 'auth_user',
            'encrypted' => TRUE,
            'lifetime' => 24 * 3600,
            'group' => 'default',
            'table' => 'sessions',
            'columns' => array(
                'session_id' => 'session_id',
                'last_active' => 'last_active',
                'contents' => 'contents'
            ),
            'gc' => 500,
        ),
    );
    I've looked high and low for an answer... if anyone has any suggestions, I'm all ears! Thanks!

    Read the article

  • Complex orderby question (entity framework)

    - by PFranchise
    Ok, so I will start by saying that I am new to all this stuff and doing my best to work on this project. I have an employee object that contains a supervisor field. When someone enters a search on my page, a datagrid displays employees whose name matches the search. But I need it to display all employees that report to them, and a third tier of employees that report to the original employee's underlings. I only need three tiers. To make this easier, employees only come in 3 ranks, so if rank == 3, that employee is not in charge of others. I imagine the best method of retrieving all these employees from my employee table would be something like:
    from employee in context.employees
    where employee.name == search
       || employee.boss.name == search
       || employee.boss.boss.name == search
    But I am not sure how to make the orderby appear the way I want it to. I need it to display in tiers, so it will look like:
    Big Boss
        Boss
            underling
            underling
        Boss
            underling
        Boss
        Boss
    Big Boss
    Like I said, there might be an easier way to approach this whole issue, and if there is, I am all ears. Any advice you can give would be HIGHLY appreciated.

    Read the article

  • Echoing variable construct instead of content

    - by danp
    I'm merging two different versions of a translations array. The more recent version has a lot of changes, over several thousand lines of code. To do this, I loaded and evaluated the new file (which uses the same structure and key names), then loaded and evaluated the older version, overwriting the new untranslated values in the array with the values we already have translated. So far so good! However, I want to be able to echo out the constructor for this new merged array so I can cut and paste it into the new translation file, and the job is done (apart from completing the rest of the translations...). The code looks like this (lots of different keys, not just index):
    $lang["index"]["chart1_label1"] = "Subscribed";
    $lang["index"]["chart1_label2"] = "Unsubscribed";
    And the old:
    $lang["index"]["chart1_label1"] = "Subscrito";
    $lang["index"]["chart1_label2"] = "Não subscrito";
    After loading the two files, I end up with a merged $lang array, which I then want to echo out in the same form, so it can be used by the project. However, when I do something like this...
    foreach ($lang as $key => $value) {
        if (is_array($value)) {
            foreach ($value as $key2 => $value2) {
                echo "$lang['".$key."']"; // ... etc etc
            }
        }
    }
    ...obviously I get "ArrayIndex" etc. instead of "$lang", because $lang gets interpolated inside the double quotes. How do I echo out $lang without it being evaluated? Once this is working I can add in the rest of the brackets etc. (I realise they are missing); I just want to make this part work first. If there's a better way to do this, I'm all ears too! Thanks.
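    For what it's worth, PHP can generate the whole constructor for you: var_export() returns a parseable representation of a value, and a literal $lang just needs single quotes (or escaping) so it is never interpolated. A sketch along the lines of the loop above:
    // Emit one assignment per leaf; var_export() quotes keys and values safely.
    foreach ($lang as $key => $value) {
        if (is_array($value)) {
            foreach ($value as $key2 => $value2) {
                echo '$lang[' . var_export($key, true) . '][' . var_export($key2, true) . '] = '
                   . var_export($value2, true) . ";\n";
            }
        }
    }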

    Read the article

  • Facebook call back function with timeout

    - by Dirty Bird Design
    So I've been getting my ass kicked pretty good by Facebook's moving target of an API. I need to display some hidden content after a person clicks 'Like' on a landing page. I can somewhat get this to work: when the user clicks 'Like', the normal FB dialogue appears and then goes away immediately, and the content is displayed. I have 'achieved' this with the following JS:
    <script>
    FB.Event.subscribe('edge.create', function(href, widget) {
        document.getElementById('goodies').style.display = "block";
        document.getElementById('fb-content').style.display = "block";
        document.getElementById('copy').style.display = "none";
    });
    </script>
    I cannot find any documentation about a callback event that fires after someone hits "post to Facebook" or after the dialogue closes - only after they hit 'Like'. How would I incorporate a setTimeout function into this to give people some time to fill out the FB dialogue? Thanks. If anyone has a better way to do this, I'm all ears. This is for a business page, and I cannot seem to add an app to get an app ID anymore, so the API is pretty useless to me at this point. Also, if the URL to be liked is a FB page, the callbacks don't seem to fire. Other code used:
    <html xmlns:fb="http://ogp.me/ns/fb#">
    <div id="fb-root"></div>
    <script>
    (function(d, s, id) {
        var js, fjs = d.getElementsByTagName(s)[0];
        if (d.getElementById(id)) return;
        js = d.createElement(s);
        js.id = id;
        js.src = "//connect.facebook.net/en_US/all.js#xfbml=1";
        fjs.parentNode.insertBefore(js, fjs);
    }(document, 'script', 'facebook-jssdk'));
    </script>
    <fb:like href="onlynonfburl.com" send="false" layout="button_count" width="450" show_faces="false" font="arial"></fb:like>
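    As far as the setTimeout part of the question goes, a sketch of one way to combine the two (the 5-second grace period is an arbitrary choice):
    <script>
    // Wait a few seconds after the like event before revealing the content,
    // giving the user time to finish (or dismiss) the comment dialogue.
    FB.Event.subscribe('edge.create', function(href, widget) {
        setTimeout(function() {
            document.getElementById('goodies').style.display = "block";
            document.getElementById('fb-content').style.display = "block";
            document.getElementById('copy').style.display = "none";
        }, 5000);
    });
    </script>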

    Read the article

  • Is CDS a valid analogy for pointers? [closed]

    - by Flinkman
    So... bear with me. I just found an analogy between C++ pointers and CDSs. This clip describes CDSs (credit default swaps): http://www.youtube.com/watch?v=KPNdYtrlgaU#t=120s
    "Here we know we have an instrument, a particular financial instrument, that is demonstrably dangerous. It creates long chains of risk which are vulnerable to the failure of individual traders or market participants in that chain, and these instruments in effect permit the creation of vicious spirals, in which the CDS price interacts with the bond price, the market price, and you can have a downward spiral."
    What my ears are telling me: "Don't create dependencies that will create long chains of crashing systems."
    Update: trying to clarify with something that is closer to the readers. If I change the words:
    instrument = construct
    financial = language
    trader = object
    market participants = C structs
    CDS price = uptime
    bond price = outcome
    market price = ROI (return on investment)
    the quote becomes more understandable. Look:
    "Here we know we have a construct, a particular language construct, that is demonstrably dangerous. It creates long chains of risk which are vulnerable to the failure of individual objects or structs in that chain, and these systems in effect permit the creation of vicious spirals, in which the uptime interacts with the outcome, the ROI, and you can have a downward spiral."

    Read the article

< Previous Page | 16 17 18 19 20 21 22 23 24 25 26  | Next Page >