Search Results

Search found 8056 results on 323 pages for 'external'.


  • Eloqua Experience 2013: Mystique, Modern Marketing and Masterful Engagement

    - by Mike Stiles
    The following is a guest post from Erick Mott, a social business leader at Oracle Eloqua. There’s a growing gap between 20th century marketing and a modern marketing way of doing business. I can’t think of a better example of modern marketing in action than what more than 2,000 people experienced in San Francisco at #EE13; customer obsession, multichannel content, and real-time engagement all coming together at one extraordinary event. This was my first Eloqua Experience as a new Oracle Eloqua employee. In weeks prior, I heard about the mystique but didn’t know what to expect. What I’ve come to understand with more clarity is everything we do revolves around customer success, and we operate and educate at all times with these five tenets in mind: 1. Targeting: Really Know Your Buyer 2. Engagement: Create a 1:1 Relationship 3. Conversion: Visualize Guided Thinking 4. Analysis: Learn What’s Working 5. Marketing Technology: Enable and Extend the Cloud Product News from Eloqua Experience 2013 We made some announcements that John Stetic, VP of Products, Oracle Eloqua covers in this brief ‘Modern Marketing Minute’ video recorded after Wednesday’s keynote; summarized below, too: Oracle Eloqua AdFocus: While understanding the impact of a specific marketing channel was formerly relegated to marketers’ wish lists, the channels we now focus on are digital, social, and mobile. AdFocus gives marketers a single platform to dynamically create, manage and measure display ads alongside owned and earned media. AdFocus enables marketers to target only the key accounts or prospects they want to reach with display ads, as well as provide creative content or personalized ad copy based on their persona and activities. Oracle Eloqua Profiler: The details of what we now know about customers have expanded into a universal customer profile, which can be used to create highly targeted segments. Marketers now can take data that’s not even stored in Eloqua to help target and score prospects for a complete, multichannel view of the customer. Profiler gives sales reps one detailed view of the prospect to extend views beyond Oracle Eloqua asset activity (emails, forms, page views) to any external assets stored in Oracle Eloqua. Marketing Resource Management: New capabilities create more secure and controlled access to marketing resources and data. New integrations provide greater insight into campaign resources and management through a central marketing calendar and simplify resource management. Integrated Sales and Marketing Funnel: An integrated sales and marketing funnel view gives marketing and sales users, cross-functional teams, and executive management a consistent and clear view of pipeline performance. It also quickly provides users with historical metrics across different time spans and conditions. Eloqua AppCloud: More than 20 new AppCloud partners have been added to the community, which now includes 100+ apps. Eloqua AppCloud now provides modern marketers with an even broader range of marketing applications that help expand and enrich sales and marketing efforts; easily accessible in the Topliners Community. Social Capabilities: Recent integration between Oracle Eloqua and Oracle Social Relationship Management (SRM) delivers a comprehensive, scalable and integrated modern marketing solution. New capabilities include better tracking of social activities for a more complete customer profile. Engage Facebook custom audiences with AdFocus to deliver ads and meaningful experiences through trusted social networks.
Biggest and Best Eloqua Experience. There’s a lot of talk in the industry about the Marketing Cloud. At Oracle Eloqua, we have been on a mission of delivering the most advanced and integrated modern marketing technology on the planet. It’s not just a concept but reality with proven execution, as seen first-hand this week in San Francisco. In this video, Kevin Akeroyd, SVP of Oracle Eloqua, provides some highlights of what made this year’s Eloqua Experience exceptional, including Steve Woods’ presentation about the journey of modern marketers and Andrea Ward’s conversation with Vince Gilligan, creator of the Breaking Bad television series. The 2013 Markie Awards. The Oracle Eloqua Marketing Cloud was best exemplified for me as 19 Markies were awarded to customers for their exceptional creativity and results as modern marketers. Wow, what a night to remember with so many committed and talented people working to create an extraordinary experience! To learn more about how to become a modern marketer, check out these resources. We look forward to seeing you next year at Eloqua Experience. More on Erick: 20 years of experience at Oracle, Ektron, Sitecore, Lyris, Habeas, Nokia, creatorbase, Mark Monitor, Cisco Systems, GlobalFluency, Sun Microsystems, Philips NV, Elm Products and CBS TV. Patent holder with agency, Fortune 500, media, and startup company expertise. @mikestiles

    Read the article

  • Quadratic Programming with Oracle R Enterprise

    - by Jeff Taylor-Oracle
    I wanted to use quadprog with ORE on a server running Oracle Solaris 11.2 on an Oracle SPARC T4 server. For background, see: Oracle SPARC T4-2 http://docs.oracle.com/cd/E23075_01/ Oracle Solaris 11.2 http://www.oracle.com/technetwork/server-storage/solaris11/overview/index.html quadprog: Functions to solve Quadratic Programming Problems http://cran.r-project.org/web/packages/quadprog/index.html Oracle R Enterprise (“ORE”) 1.4 http://www.oracle.com/technetwork/database/options/advanced-analytics/r-enterprise/ore-downloads-1502823.html Problem: the path to Solaris Studio doesn’t match my installation: $ ORE CMD INSTALL quadprog_1.5-5.tar.gz * installing to library ‘/u01/app/oracle/product/12.1.0/dbhome_1/R/library’ * installing *source* package ‘quadprog’ ... ** package ‘quadprog’ successfully unpacked and MD5 sums checked ** libs /opt/SunProd/studio12u3/solarisstudio12.3/bin/f95 -m64   -PIC  -g  -c aind.f -o aind.o bash: /opt/SunProd/studio12u3/solarisstudio12.3/bin/f95: No such file or directory *** Error code 1 make: Fatal error: Command failed for target `aind.o' ERROR: compilation failed for package ‘quadprog’ * removing ‘/u01/app/oracle/product/12.1.0/dbhome_1/R/library/quadprog’ $ ls -l /opt/solarisstudio12.3/bin/f95 lrwxrwxrwx   1 root     root          15 Aug 19 17:36 /opt/solarisstudio12.3/bin/f95 -> ../prod/bin/f90 Solution: a symbolic link: $ sudo mkdir -p /opt/SunProd/studio12u3 $ sudo ln -s /opt/solarisstudio12.3 /opt/SunProd/studio12u3/ Now, it is all good: $ ORE CMD INSTALL quadprog_1.5-5.tar.gz * installing to library ‘/u01/app/oracle/product/12.1.0/dbhome_1/R/library’ * installing *source* package ‘quadprog’ ... ** package ‘quadprog’ successfully unpacked and MD5 sums checked ** libs /opt/SunProd/studio12u3/solarisstudio12.3/bin/f95 -m64   -PIC  -g  -c aind.f -o aind.o /opt/SunProd/studio12u3/solarisstudio12.3/bin/ cc -xc99 -m64 -I/usr/lib/64/R/include -DNDEBUG -KPIC  -xlibmieee  -c init.c -o init.o /opt/SunProd/studio12u3/solarisstudio12.3/bin/f95 -m64  -PIC -g  -c -o solve.QP.compact.o solve.QP.compact.f /opt/SunProd/studio12u3/solarisstudio12.3/bin/f95 -m64  -PIC -g  -c -o solve.QP.o solve.QP.f /opt/SunProd/studio12u3/solarisstudio12.3/bin/f95 -m64   -PIC  -g  -c util.f -o util.o /opt/SunProd/studio12u3/solarisstudio12.3/bin/ cc -xc99 -m64 -G -o quadprog.so aind.o init.o solve.QP.compact.o solve.QP.o util.o -xlic_lib=sunperf -lsunmath -lifai -lsunimath -lfai -lfai2 -lfsumai -lfprodai -lfminlai -lfmaxlai -lfminvai -lfmaxvai -lfui -lfsu -lsunmath -lmtsk -lm -lifai -lsunimath -lfai -lfai2 -lfsumai -lfprodai -lfminlai -lfmaxlai -lfminvai -lfmaxvai -lfui -lfsu -lsunmath -lmtsk -lm -L/usr/lib/64/R/lib -lR installing to /u01/app/oracle/product/12.1.0/dbhome_1/R/library/quadprog/libs ** R ** preparing package for lazy loading ** help *** installing help indices   converting help for package ‘quadprog’     finding HTML links ... 
done     solve.QP                                html      solve.QP.compact                        html  ** building package indices ** testing if installed package can be loaded * DONE (quadprog) ====== Here is an example from http://cran.r-project.org/web/packages/quadprog/quadprog.pdf > require(quadprog) > Dmat <- matrix(0,3,3) > diag(Dmat) <- 1 > dvec <- c(0,5,0) > Amat <- matrix(c(-4,-3,0,2,1,0,0,-2,1),3,3) > bvec <- c(-8,2,0) > solve.QP(Dmat,dvec,Amat,bvec=bvec) $solution [1] 0.4761905 1.0476190 2.0952381 $value [1] -2.380952 $unconstrained.solution [1] 0 5 0 $iterations [1] 3 0 $Lagrangian [1] 0.0000000 0.2380952 2.0952381 $iact [1] 3 2 Here, the standard example is modified to work with Oracle R Enterprise require(ORE) ore.connect("my-name", "my-sid", "my-host", "my-pass", 1521) ore.doEval(   function () {     require(quadprog)   } ) ore.doEval(   function () {     Dmat <- matrix(0,3,3)     diag(Dmat) <- 1     dvec <- c(0,5,0)     Amat <- matrix(c(-4,-3,0,2,1,0,0,-2,1),3,3)     bvec <- c(-8,2,0)    solve.QP(Dmat,dvec,Amat,bvec=bvec)   } ) $solution [1] 0.4761905 1.0476190 2.0952381 $value [1] -2.380952 $unconstrained.solution [1] 0 5 0 $iterations [1] 3 0 $Lagrangian [1] 0.0000000 0.2380952 2.0952381 $iact [1] 3 2 Now I can combine the quadprog compute algorithms with the Oracle R Enterprise Database engine functionality: Scale to large datasets Access to tables, views, and external tables in the database, as well as those accessible through database links Use SQL query parallel execution Use in-database statistical and data mining functionality
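    As a concrete illustration of running the computation inside the database, here is a minimal sketch (not from the original post) that pushes the same solve.QP call to the server with ore.tableApply. The table QP_INPUTS and its DVEC and BVEC columns are invented purely for illustration; the quadratic and constraint matrices are the ones from the example above.

        # Hypothetical: assumes a 3-row database table QP_INPUTS with numeric
        # columns DVEC and BVEC holding the linear term and constraint bounds.
        library(ORE)
        ore.connect("my-name", "my-sid", "my-host", "my-pass", 1521)
        res <- ore.tableApply(
          ore.get("QP_INPUTS"),        # ore.frame proxy for the table
          function(dat) {
            require(quadprog)
            Dmat <- diag(3)                                 # identity quadratic term
            Amat <- matrix(c(-4,-3,0,2,1,0,0,-2,1), 3, 3)   # same constraints as above
            solve.QP(Dmat, dat$DVEC, Amat, bvec = dat$BVEC)
          }
        )
        res    # the QP is solved by the R engine embedded in the database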

    Read the article

  • Revisiting the Generations

    - by Row Henson
    I was asked earlier this year to contribute an article to the IHRIM publication – Workforce Solutions Review.  My topic focused on the reality of the Gen Y population 10 years after their entry into the workforce.  Below is an excerpt from that article: It seems like yesterday that we were all talking about the entry of the Gen Y'ers into the workforce and what a radical impact that would have on how we attract, retain, motivate, reward, and engage this new, younger segment of the workforce.  We all heard and read that these youngsters would be more entrepreneurial than their predecessors – the Gen X'ers – who were said to be more loyal to their profession than their employer. And, we heard that these “youngsters” would certainly be far less loyal to their employers than the Baby Boomers or even earlier Traditionalists. It was also predicted that – at least for the developed parts of the world – they would be more interested in work/life balance than financial reward; they would need constant and immediate reinforcement and recognition and we would be lucky to have them in our employment for two to three years. And, to keep them longer than that we would need to promote them often so they would be continuously learning since their long-term (10-year) goal would be to own their own business or be an independent consultant.  Well, it occurred to me recently that the first of the Gen Y'ers are now in their early 30s and it is time to look back on some of these predictions. Many really believed the Gen Y'ers would enter the workforce with an attitude – expect everything to be easy for them – have their employers meet their demands or move to the next employer, and I believe that we can now say that, generally, that has not been the case. Speaking from personal experience, I have mentored a number of Gen Y'ers and initially felt that with a 40-year career in Human Resources and Human Resources Technology – I could share a lot with them. I found out very quickly that I was learning at least as much from them! Some of the amazing attributes I found in these under-30s were their fearlessness, the ease with which they were able to multi-task, amazing energy and great technical savvy. They were very comfortable with collaborating with colleagues from both inside the company and peers outside their organization to problem-solve quickly. Most were eager to learn and willing to work hard.  This brings me to the generation that will follow the Gen Y'ers – the Generation Z'ers – those born after 1998. We have come full circle. If we look at the Silent Generation or Traditionalists, we find a workforce that preceded the television and even very early telephones. We Baby Boomers (as I fall right squarely in this category) remember the invention of the television and telephone – but laptop computers and personal digital assistants (PDAs) were a thing of “Star Trek” and other science fiction movies and publications. Certainly, the Gen X'ers and Gen Y'ers grew up with the comfort of these devices just as we did with calculators. But, what of those under the age of 10 – how will the workplace look in 15 more years, and what type of workforce will be required to operate in the mobile, global, virtual world? I spoke to a friend recently who had her four-year-old granddaughter for a visit. She said she found her in the den in front of the TV trying to use her hand to get the screen to move! So, you see – we have come full circle. 
The under-70 Traditionalist grew up in a world without TV and the Generation Z'er may never remember the TV we knew just a few years ago. As with every generation – we spend much time generalizing on their characteristics. The most important thing to remember is every generation – just like every individual – is different. The important thing for those of us in Human Resources to remember is that one size doesn’t fit all. What motivates one employee to come to work for you and stay there and be productive is very different than what the next employee is looking for and the organization that can provide this fluidity and flexibility will be the survivor for generations to come. And, finally, just when we think we have it figured out, a multitude of external factors such as the economy, world politics, industries, and technologies we haven’t even thought about will come along and change those predictions. As I reach retirement age – I do so believing that our organizations are in good hands with the generations to follow – energetic, collaborative and capable of working hard while still understanding the need for balance at work, at home and in the community!

    Read the article

  • State of the (Commerce) Union: What the healthcare.gov hiccups teach us about the commerce customer experience

    - by Katrina Gosek
    Guest Post by Brenna Johnson, Oracle Commerce Product A lot has been said about the healthcare.gov debacle in the last week. Regardless of your feelings about the Affordable Care Act, there’s a hidden issue in this story that most of the American people don’t understand: delivering a great commerce customer experience (CX) is hard. It shouldn’t be, but it is. The reality of the government’s issues getting the healthcare site up and running smooth is something we in the online commerce community know too well.  If there’s one thing the botched launch of the site has taught us, it’s that regardless of the size of your budget or the power of an executive with a high-profile project, some of the biggest initiatives with the most attention (and the most at stake) don’t go as planned. It may even give you a moment of solace – we have the same issues! But why?  Organizations engage too many separate vendors with different technologies, running sections or pieces of a site to get live. When things go wrong, it takes time to identify the problem – and who or what is at the center of it. Unfortunately, this is a brittle way of setting up a site, making it susceptible to breaks, bugs, and scaling issues. But, it’s the reality of running a site with legacy technology constraints in today’s demanding, customer-centric market. This approach also means there’s also a lot of cooks in lots of different kitchens. You’ve got development and IT, the business and the marketing team, an external Systems Integrator to bring it all together, a digital agency or consultant, QA, product experts, 3rd party suppliers, and the list goes on. To complicate things, different business units are held responsible for different pieces of the site and managing different technologies. And again – due to legacy organizational structure and processes, this is all accepted as the normal State of the Union. Digital commerce has been commonplace for 15 years. Yet, getting a site live, maintained and performing requires orchestrating a cast of thousands (or at least, dozens), big dollars, and some finger-crossing. But it shouldn’t. The great thing about the advent of mobile commerce and the continued maturity of online commerce is that it’s forced organizations to think from the outside, in. Consumers – whether they’re shopping for shoes or a new healthcare plan – don’t care about what technology issues or processes you have behind the scenes. They just want it to work.  They want their experience to be easy, fast, and tailored to them and their needs – whatever they are. This doesn’t sound like a tall order to the American consumer – especially since they interact with sites that do work smoothly.  But the reality is that it takes scores of people, teams, check-ins, late nights, testing, and some good luck to get sites to run, and even more so at Black Friday (or October 1st) traffic levels.  The last thing on a customer’s mind is making excuses for why they can’t buy a product – just get it to work. So what is the government doing? My guess is working day and night to get the site performing  - and having to throw big money at the problem. In the meantime they’re sending frustrated online users to the call center, or even a location where a trained “navigator” can help them in-person to complete their selection. Sounds a lot like multichannel commerce (where broken communication between siloed touchpoints will only frustrate the consumer more). 
One thing we’ve learned is that consumers spend their time and money with brands they know and trust. When sites are easy to use and adapt to their needs, they tend to spend more, come back, and even become long-time loyalists. Achieving this may require moving internal mountains, but there’s too much at stake to ignore the sea change in how organizations are thinking about their customer. If the thought of re-thinking your internal teams, technologies, and processes sounds like a headache, think about the pain associated with losing valuable customers – and dollars. Regardless if you’re in B2B or B2C, it’s guaranteed that your competitors are making CX a priority. Those early to the game who have made CX a priority have already begun to outpace their competition. So as you’re planning for 2014, look to the news this week. Make sure the customer experience is a focus at your organization. Expectations are at record highs. Map your customer’s journey, and think from the outside, in. How easy is it for your customers to do business with you? If they interact with many touchpoints across your organization, are the call center, website, mobile environment, or brick and mortar location in sync? Do you have the technology in place to achieve this? It’s time to give the people what they want!

    Read the article

  • Visual View for Schema Based Editor

    - by Geertjan
    Starting from yesterday's blog entry, make the following change in the DataObject's constructor: registerEditor("text/x-sample+xml", true); I.e., the MultiDataObject.registerEditor method turns the editor into a multiview component. Now, again, within the DataObject, add the following, to register a source editor in the multiview component: @MultiViewElement.Registration(         displayName = "#LBL_Sample_Source",         mimeType = "text/x-sample+xml",         persistenceType = TopComponent.PERSISTENCE_NEVER,         preferredID = "ShipOrderSourceView",         position = 1000) @NbBundle.Messages({     "LBL_Sample_Source=Source" }) public static MultiViewElement createEditor(Lookup lkp){     return new MultiViewEditorElement(lkp); } Result: Next, let's create a visual editor in the multiview component. This could be within the same module as the above or within a completely separate module. That makes it possible for external contributors to provide modules with new editors in an existing multiview component: @MultiViewElement.Registration(displayName = "#LBL_Sample_Visual", mimeType = "text/x-sample+xml", persistenceType = TopComponent.PERSISTENCE_NEVER, preferredID = "VisualEditorComponent", position = 500) @NbBundle.Messages({ "LBL_Sample_Visual=Visual" }) public class VisualEditorComponent extends JPanel implements MultiViewElement {     public VisualEditorComponent() {         initComponents();     }     @Override     public String getName() {         return "VisualEditorComponent";     }     @Override     public JComponent getVisualRepresentation() {         return this;     }     @Override     public JComponent getToolbarRepresentation() {         return new JToolBar();     }     @Override     public Action[] getActions() {         return new Action[0];     }     @Override     public Lookup getLookup() {         return Lookup.EMPTY;     }     @Override     public void componentOpened() {     }     @Override     public void componentClosed() {     }     @Override     public void componentShowing() {     }     @Override     public void componentHidden() {     }     @Override     public void componentActivated() {     }     @Override     public void componentDeactivated() {     }     @Override     public UndoRedo getUndoRedo() {         return UndoRedo.NONE;     }     @Override     public void setMultiViewCallback(MultiViewElementCallback callback) {     }     @Override     public CloseOperationState canCloseElement() {         return CloseOperationState.STATE_OK;     } } Result: Next, the DataObject is automatically returned from the Lookup of DataObject. Therefore, you can go back to your visual editor, add a LookupListener, listen for DataObjects, parse the underlying XML file, and display values in GUI components within the visual editor.
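    To make that last step a little more tangible, a rough sketch of such a LookupListener is shown below. This is not code from the entry: it assumes VisualEditorComponent also implements LookupListener and that the element receives the file's Lookup through its constructor.

        // Hypothetical sketch: the class declaration becomes
        // public class VisualEditorComponent extends JPanel
        //         implements MultiViewElement, LookupListener {
        private Lookup.Result<DataObject> result;

        public VisualEditorComponent(Lookup lkp) {
            initComponents();
            result = lkp.lookupResult(DataObject.class);
            result.addLookupListener(this);
            resultChanged(null);                  // pick up the initial DataObject
        }

        @Override
        public void resultChanged(LookupEvent ev) {
            for (DataObject dobj : result.allInstances()) {
                // parse dobj.getPrimaryFile() (the underlying XML file) here
                // and push the extracted values into the panel's components
            }
        }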

    Read the article

  • ASP.NET: Including JavaScript libraries conditionally from CDN

    - by DigiMortal
    When developing cloud applications it is still useful to build them so they can also run on a local machine without a network connection. One thing you load from a CDN when in the cloud and from the app folder when not connected is common JavaScript libraries. In this posting I will show you how to add support for local and CDN script stores to your ASP.NET MVC web application. Our solution is simple. We will add a new configuration setting to our web.config file (including its cloud transform file) and a new property to our web application. In the master page where scripts are included we will include scripts from the CDN conditionally. There is nothing complex; all the changes we make are simple ones. 1. Adding a new property to the web application Although I am using an ASP.NET MVC web application, these modifications also work very well with ASP.NET Web Forms. Open Global.asax and add a new static property to your application class. public static bool UseCdn {     get     {         var valueString = ConfigurationManager.AppSettings["useCdn"];         bool useCdn;           bool.TryParse(valueString, out useCdn);         return useCdn;     } } If you want fewer round-trips to configuration data you can keep a nullable boolean in your application class scope and load the CDN setting the first time it is asked for (a rough sketch of this idea appears at the end of this posting). 2. Adding a new configuration setting to web.config By default my application uses local scripts. Although my application runs in the cloud I can do a lot of work without a staging environment in the cloud. So by default I don’t incur traffic costs when including scripts from application folders. <appSettings>   <add key="UseCdn" value="false" /> </appSettings> You can also set the UseCdn value to true and change it to false when you are not connected to a network. 3. Modifying the web.config cloud transform I have a special configuration for my solution that I use when deploying my web application to the cloud. This configuration is called Cloud and the transform for this configuration is located in web.cloud.config. To make the application use the CDN when deployed to the cloud we need the following transform. <appSettings>   <add key="UseCdn"        value="true"        xdt:Transform="SetAttributes"        xdt:Locator="Match(key)" /> </appSettings> Now when you publish your application to the cloud it uses the CDN by default. 4. Including scripts in master pages The last thing we need to change is our master page. My solution is simple. I check whether I have to include scripts from the CDN and if so I include scripts from there. Otherwise my scripts will be included from the application folder. @if (MyWeb.MvcApplication.UseCdn) {     <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.4.min.js" type="text/javascript"></script> } else {     <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script> } Although only one script is shown here, you can add all of your scripts that are also available in some CDN to this if-else block. You are free to include scripts from different CDN services if you need to. Conclusion As we saw, it was very easy to modify our application to make it use a CDN for JavaScript libraries in the cloud and local scripts when run on a local machine. We made only small changes to our application code, configuration and master pages to get different script sources supported. Our application is now more independent of external sources when we are working on it.
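    As a footnote to step 1, the caching idea mentioned there (keeping a nullable boolean and loading the CDN setting only the first time it is asked for) could look roughly like the following sketch; this is an assumption about one way to do it, not code from the original posting.

        // Hypothetical sketch: cache the appSetting in a nullable bool so the
        // configuration is only read once per application lifetime.
        private static bool? _useCdn;

        public static bool UseCdn
        {
            get
            {
                if (!_useCdn.HasValue)
                {
                    bool value;
                    bool.TryParse(ConfigurationManager.AppSettings["UseCdn"], out value);
                    _useCdn = value;   // stays false if the key is missing or invalid
                }
                return _useCdn.Value;
            }
        }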

    Read the article

  • Web optimization

    - by hmloo
    1. CSS Optimization Organize your CSS code Good CSS organization helps with future maintainability of the site; it helps you and your team members understand the CSS more quickly and jump to specific styles. Structure CSS code For a small project, you can break your CSS code into separate blocks according to the structure of the page or the page content. For example, you can organize your CSS document according to the content of your web page (e.g. Header, Main Content, Footer). Structure CSS file For a large project, you may find you have too much CSS code in one place, so it’s best to structure your CSS into several CSS files and use a master style sheet to import these style sheets. This solution not only organizes the style structure, but also reduces server requests./*--------------Master style sheet--------------*/ @import "Reset.css"; @import "Structure.css"; @import "Typography.css"; @import "Forms.css"; Create an index for your CSS Another important thing is to create an index at the beginning of your CSS file; an index can help you quickly understand the whole CSS structure./*---------------------------------------- 1. Header 2. Navigation 3. Main Content 4. Sidebar 5. Footer ------------------------------------------*/ Writing efficient CSS selectors Keep in mind that browsers match CSS selectors from right to left. The order of efficiency for selectors is: 1. id (#myid) 2. class (.myclass) 3. tag (div, h1, p) 4. adjacent sibling (h1 + p) 5. child (ul > li) 6. descendent (li a) 7. universal (*) 8. attribute (a[rel="external"]) 9. pseudo-class and pseudo element (a:hover, li:first) The rightmost selector is called the "key selector", so when you write your CSS code, you should choose a more efficient key selector. Here are some best practices: Don’t tag-qualify Never do this: div#myid div.myclass .myclass#myid IDs are unique, and classes are more specific than a tag, so they don’t need a tag. Doing so makes the selector less efficient. Avoid overqualifying selectors For example, #nav a is more efficient than ul#nav li a Don’t repeat declarations Example: body {font-size:12px;} h1 {font-size:12px;font-weight:bold;} Since h1 already inherits font-size from body, you don’t need to repeat the attribute. Use 0 instead of 0px Always use #selector { margin: 0; } There’s no need to include the px after 0; removing all those superfluous px suffixes can reduce the size of your CSS file. Group declarations Example: h1 { font-size: 16pt; } h1 { color: #fff; } h1 { font-family: Arial, sans-serif; } It’s much better to combine them: h1 { font-size: 16pt; color: #fff; font-family: Arial, sans-serif; } Group selectors Example: h1 { color: #fff; font-family: Arial, sans-serif; } h2 { color: #fff; font-family: Arial, sans-serif; } It would be much better if set up as: h1, h2 { color: #fff; font-family: Arial, sans-serif; } Group attributes Example: h1 { color: #fff; font-family: Arial, sans-serif; } h2 { color: #fff; font-family: Arial, sans-serif; font-size: 16pt; } You can set different rules for specific elements after setting a rule for a group: h1, h2 { color: #fff; font-family: Arial, sans-serif; } h2 { font-size: 16pt; } Using Shorthand Properties Example: #selector { margin-top: 8px; margin-right: 4px; margin-bottom: 8px; margin-left: 4px; } Better: #selector { margin: 8px 4px 8px 4px; } Best: #selector { margin: 8px 4px; } A good diagram illustrates how shorthand declarations are interpreted depending on how many values are specified for the margin and padding properties. 
Instead of using: #selector { background-image: url("logo.png"); background-position: top left; background-repeat: no-repeat; } use: #selector { background: url(logo.png) no-repeat top left; } 2. Image Optimization Image Optimizer Image Optimizer is a free Visual Studio 2010 extension that optimizes PNG, GIF and JPG file sizes without quality loss. It uses SmushIt and PunyPNG for the optimization. Just right-click on any folder or image in Solution Explorer and choose Optimize Images; it will then automatically optimize all PNG, GIF and JPEG files in that folder. CSS Image Sprites CSS image sprites are a way to combine a collection of images into a single image, then use the CSS background-position property to shift the visible area to show the required image. Many images can take a long time to load and generate multiple server requests, so an image sprite can reduce the number of server requests and improve site performance. You can use many online tools to generate your image sprite and CSS, and you can also try the Sprite and Image Optimization framework released by the ASP.NET team.
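    To illustrate the sprite idea, a minimal sketch is shown below; the file name, class names and offsets are made up for the example and assume 16x16 icons laid out in a single row.

        /* Hypothetical sprite usage: one combined image, shifted per icon */
        .icon          { background: url(sprite.png) no-repeat; width: 16px; height: 16px; }
        .icon-home     { background-position: 0 0; }
        .icon-search   { background-position: -16px 0; }
        .icon-settings { background-position: -32px 0; }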

    Read the article

  • Openlayers - Redraw() layer. Add / Remove layer.

    - by Ozaki
    TLDR: I have an Openlayers map with a layer called 'track' I want to remove track and add track back in. I have an image 'imageFeature' on a layer that rotates on load to the direction being set. I want it to update this rotation that is set in 'styleMap' on a layer called 'tracking'. I set the var 'stylemap' to apply the external image & rotation. The 'imageFeature' is added to the layer at the coords specified. 'imageFeature' is removed. 'imageFeature' is added again in its new location. Rotation is not applied.. As the 'styleMap' applies to the layer I think that I have to remove the layer and add it again rather than just the 'imageFeature' Layer: var tracking = new OpenLayers.Layer.GML("Tracking", "coordinates.json", { format: OpenLayers.Format.GeoJSON, styleMap: styleMap }); styleMap: var styleMap = new OpenLayers.StyleMap({ fillOpacity: 1, pointRadius: 10, rotation: heading, }); Now wrapped in a timed function the imageFeature: map.layers[3].addFeatures(new OpenLayers.Feature.Vector( new OpenLayers.Geometry.Point(longitude, latitude), {rotation: heading, type: parseInt(Math.random() * 3)} )); Type refers to a lookup of 1 of 3 images.: styleMap.addUniqueValueRules("default", "type", lookup); var lookup = { 0: {externalGraphic: "Image1.png", rotation: heading}, 1: {externalGraphic: "Image2.png", rotation: heading}, 2: {externalGraphic: "Image3.png", rotation: heading} } I have tried the 'redraw()' function: but it returns "tracking is undefined" or "map.layers[2]" is undefined. tracking.redraw(true); map.layers[2].redraw(true); Heading is a variable: from a JSON feed. var heading = 13.542; But so far can't get anything to work it will only rotate the image onload. The image will move in coordinates as it should though. So what am I doing wrong with the redraw function or how can I get this image to rotate live? Thanks in advance -Ozaki Add: I managed to get map.layers[2].redraw(true); to sucessfully redraw layer 2. But it still does not update the rotation. I am thinking because the stylemap is updating. But it runs through the style map every n sec, but no updates to rotation and the variable for heading is updating correctly if i put a watch on it in firebug.
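    One approach worth trying (offered as a sketch, not verified against the poster's setup) is to take the rotation out of the shared styleMap and read it from each feature's attributes with OpenLayers' "${...}" substitution, updating the attribute and redrawing the feature whenever the JSON feed delivers a new heading.

        // Hypothetical sketch for OpenLayers 2.x
        var styleMap = new OpenLayers.StyleMap(new OpenLayers.Style({
            externalGraphic: "Image1.png",
            pointRadius: 10,
            fillOpacity: 1,
            rotation: "${heading}"          // looked up per feature at draw time
        }));

        // later, when a new heading arrives from the feed:
        feature.attributes.heading = heading;
        tracking.drawFeature(feature);       // or whichever layer reference you hold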

    Read the article

  • Android: OutofMemoryError: bitmap size exceeds VM budget with no reason I can see.

    - by Meymann
    Hi. I am having an OutOfMemory exception with a gallery over 600x800 pixels JPEG's. The environment I've been using Gallery with JPG images around 600x800 pixels. Since my content may be a bit more complex than just images, I have set each view to be a RelativeLayout that wraps ImageView with the JPG. In order to "speed up" the user experience I have a simple cache of 4 slots that prefetches (in a looper) about 1 image left and 1 image right to the displayed image and keeps them in a 4 slot HashMap. The platform I am using AVD of 256 RAM and 128 Heap Size, with a 600x800 screen. It also happens on an Entourage Edge target, except that with the device it's harder to debug. The problem I have been getting an exception: OutofMemoryError: bitmap size exceeds VM budget And it happens when fetching the fifth image. I have tried to change the size of my image cache, and it is still the same. The strange thing: There should not be a memory problem In order to make sure the heap limit is very far away from what I need, I have defined a dummy 8MB array in the beginning, and left it unreferenced so it's immediately dispatched. It is a member of the activity thread and is defined as following static { @SuppressWarnings("unused") byte dummy[] = new byte[ 8*1024*1024 ]; } The result is that the heap size is nearly 11MB and it's all free. Note I have added that trick after it began to crash. It makes OutOfMemory less frequent. Now, I am using DDMS. Just before the crash (does not change much after the crash), DDMS shows: ID Heap Size Allocated Free %Used #Objects 1 11.195 MB 2.428 MB 8.767 MB 21.69% 47,156 And in the detail table it shows: Type Count Total Size Smallest Largest Median Average free 1,536 8.739MB 16B 7.750MB 24B 5.825KB The largest block is 7.7MB. And yet the LogCat says: ERROR/dalvikvm-heap(1923): 925200-byte external allocation too large for this process. If you mind the relation of the median and the average, it is plausible to assume that most of the available blocks are very small. However, there is a block large enough for the bitmap, it's 7.7M. How come it is still not enough? Note: I recorded a heap trace. When looking at the amount of data allocated, it does not feel like more than 2M is allocated. It does match the free memory report by DDMS. Could it be that I experience some problem like heap-fragmentation? How do I solve/workaround the problem? Is the heap shared to all threads? Could it be that I interpret the DDMS readout in a wrong way, and there is really no 900K block to allocate? If so, can anybody please tell me where I can see that? Thanks a lot Meymann
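    Not an answer to the fragmentation question itself, but for context: the usual first mitigation for this class of error is to decode the gallery JPEGs with an inSampleSize so each bitmap claims a fraction of the memory. A rough sketch (assuming a file path per image) follows.

        // Hypothetical sketch: downsample to roughly the 600x800 display size
        BitmapFactory.Options bounds = new BitmapFactory.Options();
        bounds.inJustDecodeBounds = true;            // read dimensions only
        BitmapFactory.decodeFile(path, bounds);

        int sample = 1;
        while (bounds.outWidth / sample > 600 || bounds.outHeight / sample > 800) {
            sample *= 2;                             // power-of-two steps decode fastest
        }

        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inSampleSize = sample;
        Bitmap bitmap = BitmapFactory.decodeFile(path, opts);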

    Read the article

  • Why do many software projects fail today?

    - by TomTom
    As long as there are software projects, the world is wondering why they fail so often. I would like to know if there is a list or something equivalent which shows how many software projects fail today. It would be nice if there were a comparison over the last 20-30 years. You can also add your top reason why a software project fails. Mine is "Requirements are poor or not even existing," which also includes "No (real) customer / user involved". EDIT: It is nearly impossible to clearly define the term "fail". Let’s say that fail means: The project was more than 10% over budget and time. In my opinion the 10% +/- is a good range for an offer / tender. EDIT: Until now (Feb 11) it seems that most posters agree that a failure of the project is basically a failure of the project management (whatever fail means). But IMHO it turns out that most developers are not happy with this situation. Perhaps because it is not the manager who gets penalized when a project is not successful, but the lazy, incompetent developer teams? When I read the posts I can also hear that there is a big "gap" between the developer side and the management side. The expectations (perhaps also the requirements) seem to be so different that a project cannot be successful in the end (over time / budget; users are not happy; not all first-prio features implemented; too many bugs because developers were forced to implement in too short timeframes ...). I'm asking myself: How can we improve it? Or do we have the possibility to improve it? Everybody seems to be unsatisfied with the way it goes now. Can we close the gap between these two worlds? Should we (the developers) go on strike and fight for "high quality requirements" and "realistic / iteration based time schedules"? EDIT: Ralph Westphal and Stefan Lieser have founded a new "community" called: Clean-Code-Developer. The aim of the group is to bring more professionalism into software engineering. Whether a developer has a degree or tons of years of experience or not, you can be part of this movement. Clean Code Developers live principles like SOLID every day. A professional developer is the biggest reviewer of his own work. And he has an internal value system which helps him to improve and become better. Check it out on: Clean Code Developer EDIT: Our company is at the moment doing a thing called "Application Development and Maintenance Benchmarking". This is a service offered by IBM to get feedback from someone external on your software engineering process quality etc. When we get the results, I will tell you more about it.

    Read the article

  • TFS and shared projects in multiple solutions

    - by David Stratton
    Our .NET team works on projects for our company that fall into distinct categories. Some are internal web apps, some are external (publicly facing) web apps, we also have internal Windows applications for our corporate office users, and Windows Forms apps for our retail locations (stores). Of course, because we hate code reuse, we have a ton of code that is shared among the different applications. Currently we're using SVN as our source control, and we've got our repository laid out like this: - = folder, | = Visual Studio Solution -SVN - Internet | Ourcompany.com | Oursecondcompany.com - Intranet | UniformOrdering website | MessageCenter website - Shared | ErrorLoggingModule | RegularExpressionGenerator | Anti-Xss | OrgChartModule etc... So.. The OurCompany.com solution in the Internet folder would have a website project, and it would also include the ErrorLoggingModule, RegularExpressionGenerator, and Anti-Xss projects from the shared directory. Similarly, our UniformOrdering website solution would have each of these projects included in the solution as well. We prefer to have a project reference to a .dll reference because, first of all, if we need to add or fix a function in the ErrorLoggingModule while working on the OurCompany.com website, it's right there. Also, this allows us to build each solution and see if changes to shared code break any other applications. This should work well on a build server as well if I'm correct. In SVN, there is no problem with this. SVN and Visual Studio aren't tied together in the way TFS's source control is. We never figured out how to work this type of structure in TFS when we were using it, because in TFS, the TFS project was always tied to a Visual Studio Solution. The Source Code repository was a child of the TFS Project, so if we wanted to do this, we had to duplicate the Shared code in each TFS project's source code repository. As my co-worker put it, this "breaks every known best practice about code reuse and simplicity". It was enough of a deal breaker for us that we switched to SVN. Now, however, we're faced with truly fixing our development processes, and the Application Lifecycle Management of TFS is pretty close to exactly what we want, and how we want to work. Our one sticking point is the shared code issue. We're evaluating other commercial and open source solutions, but since we're already paying for TFS with our MSDN Subscriptions, and TFS is pretty much exactly what we want, we'd REALLY like to find a way around this issue. Has anybody else faced this and come up with a solution? If you've seen an article or posting on this that you can share with me, that would help as well. As always, I'm open to answers like "You're looking at it all wrong, bonehead, HERE'S the way it SHOULD be done.

    Read the article

  • Issues with MIPS interrupt for tv remote simulator

    - by pred2040
    Hello I am writing a program for class to simulate a tv remote in a MIPS/SPIM enviroment. The functions of the program itself are unimportant as they worked fine before the interrupt so I left them all out. The gaol is basically to get a input from the keyboard by means of interupt, store it in $s7 and process it. The interrupt is causing my program to repeatedly spam the errors: Exception occurred at PC=0x00400068 Bad address in data/stack read: 0x00000004 Exception occurred at PC=0x00400358 Bad address in data/stack read: 0x00000000 program starts here .data msg_tvworking: .asciiz "tv is working\n" msg_sec: .asciiz "sec -- " msg_on: .asciiz "Power On" msg_off: .asciiz "Power Off" msg_channel: .asciiz " Channel " msg_volume: .asciiz " Volume " msg_sleep: .asciiz " Sleep Timer: " msg_dash: .asciiz "-\n" msg_newline: .asciiz "\n" msg_comma: .asciiz ", " array1: .space 400 # 400 bytes of storage for 100 channels array2: .space 400 # copy of above for sorting var1: .word 0 # 1 if 0-9 is pressed, 0 if not var2: .word 0 # stores number of channel (ex. 2-) var3: .word 0 # channel timer var4: .word 0 # 1 if s pressed once, 2 if twice, 0 if not var5: .word 0 # sleep wait timer var6: .word 0 # program timer var9: .float 0.01 # for channel timings .kdata var7: .word 10 var8: .word 11 .text .globl main main: li $s0, 300 li $s1, 0 # channel li $s2, 50 # volume li $s3, 1 # power - 1:on 0:off li $s4, 0 # sleep timer - 0:off li $s5, 0 # temporary li $s6, 0 # length of sleep period li $s7, 10000 # current key press li $t2, 0 # temp value not needed across calls li $t4, 0 interrupt data here mfc0 $a0, $12 ori $a0, 0xff11 mtc0 $a0, $12 lui $t0, 0xFFFF ori $a0, $0, 2 sw $a0, 0($t0) mainloop: # 1. get external input, and process it # input from interupt is taken from $a2 and placed in $s7 #for processing beq $a2, $0, next lw $s7, 4($a2) li $a2, 0 # call the process_input function here # jal process_input next: # 2. check sleep timer mainloopnext1: # 3. delay for 10ms jal delay_10ms jal check_timers jal channel_time # 4. 
print status lw $s5, var6 addi $s5, $s5, 1 sw $s5, var6 addi $s0, $s0, -1 bne $s0, $0, mainloopnext4 li $s0, 300 jal status_print mainloopnext4: j mainloop li $v0,10 # exit syscall -------------------------------------------------- status_print: seconds_stat: power_stat: on_stat: off_stat: channel_stat: volume_stat: sleep_stat: j $ra -------------------------------------------------- delay_10ms: li $t0, 6000 delay_10ms_loop: addi $t0, $t0, -1 bne $t0, $0, delay_10ms_loop jr $ra -------------------------------------------------- check_timers: channel_press: sleep_press: go_back_press: channel_check: channel_ignore: sleep_check: sleep_ignore: j $ra ------------------------------------------------ process_input: beq $s7, 112, power beq $s7, 117, channel_up beq $s7, 100, channel_down beq $s7, 108, volume_up beq $s7, 107, volume_down beq $s7, 115, sleep_init beq $s7, 118, history bgt $s7, 47, end_range jr $ra end_range: power: on: off: channel_up: over: channel_down: under: channel_message: channel_time: volume_up: volume_down: volume_message: sleep_init: sleep_incr: sleep: sleep_reset: history: digit_pad_init: digit_pad: jr $ra -------------------------------------------- interupt data here, followed closely from class .ktext 0x80000180 .set noat move $k1, $at .set at sw $v0, var7 sw $a0, var8 mfc0 $k0, $13 srl $a0, $k0, 2 andi $a0, $a0, 0x1f bne $a0, $zero, no_io lui $v0, 0xFFFF lw $a2, 4($v0) # keyboard data placed in $a2 no_io: mtc0 $0, $13 mfc0 $k0, $12 andi $k0, 0xfffd ori $k0, 0x11 mtc0 $k0, $12 lw $v0, var7 lw $a0, var8 .set noat move $at, $k1 .set at eret Thanks in advance.

    Read the article

  • How do HttpOnly cookies work with AJAX requests?

    - by Shawn Simon
    JavaScript needs access to cookies if AJAX is used on a site with access restrictions based on cookies. Will HttpOnly cookies work on an AJAX site? Edit: Microsoft created a way to prevent XSS attacks by disallowing JavaScript access to cookies if HttpOnly is specified. FireFox later adopted this. So my question is: If you are using AJAX on a site, like StackOverflow, are Http-Only cookies an option? Edit 2: Question 2. If the purpose of HttpOnly is to prevent JavaScript access to cookies, and you can still retrieve the cookies via JavaScript through the XmlHttpRequest Object, what is the point of HttpOnly? Edit 3: Here is a quote from Wikipedia: When the browser receives such a cookie, it is supposed to use it as usual in the following HTTP exchanges, but not to make it visible to client-side scripts.[32] The HttpOnly flag is not part of any standard, and is not implemented in all browsers. Note that there is currently no prevention of reading or writing the session cookie via a XMLHTTPRequest. [33]. I understand that document.cookie is blocked when you use HttpOnly. But it seems that you can still read cookie values in the XMLHttpRequest object, allowing for XSS. How does HttpOnly make you any safer than? By making cookies essentially read only? In your example, I cannot write to your document.cookie, but I can still steal your cookie and post it to my domain using the XMLHttpRequest object. <script type="text/javascript"> var req = null; try { req = new XMLHttpRequest(); } catch(e) {} if (!req) try { req = new ActiveXObject("Msxml2.XMLHTTP"); } catch(e) {} if (!req) try { req = new ActiveXObject("Microsoft.XMLHTTP"); } catch(e) {} req.open('GET', 'http://beta.stackoverflow.com/', false); req.send(null); alert(req.getAllResponseHeaders()); </script> Edit 4: Sorry, I meant that you could send the XMLHttpRequest to the StackOverflow domain, and then save the result of getAllResponseHeaders() to a string, regex out the cookie, and then post that to an external domain. It appears that Wikipedia and ha.ckers concur with me on this one, but I would love be re-educated... Final Edit: Ahh, apparently both sites are wrong, this is actually a bug in FireFox. IE6 & 7 are actually the only browsers that currently fully support HttpOnly. To reiterate everything I've learned: HttpOnly restricts all access to document.cookie in IE7 & and FireFox (not sure about other browsers) HttpOnly removes cookie information from the response headers in XMLHttpObject.getAllResponseHeaders() in IE7. XMLHttpObjects may only be submitted to the domain they originated from, so there is no cross-domain posting of the cookies. edit: This information is likely no longer up to date.
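    For reference, the flag itself is just an attribute on the Set-Cookie response header; a server might emit something like the first line below (illustrative values), and in ASP.NET it can also be switched on site-wide in web.config.

        Set-Cookie: ASP.NET_SessionId=abc123; path=/; HttpOnly

        <system.web>
          <httpCookies httpOnlyCookies="true" />
        </system.web>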

    Read the article

  • What is in your Mathematica tool bag?

    - by Timo
    We all know that Mathematica is great, but it also often lacks critical functionality. What kind of external packages / tools / resources do you use with Mathematica? I'll edit (and invite anyone else to do so too) this main post to include resources which are focused on general applicability in scientific research and which as many people as possible will find useful. Feel free to contribute anything, even small code snippets (as I did below for a timing routine). Also, undocumented and useful features in Mathematica 7 and beyond you found yourself, or dug up from some paper/site are most welcome. Please include a short description or comment on why something is great or what utility it provides. If you link to books on Amazon with affiliate links please mention it, e.g., by putting your name after the link. Packages: LevelScheme is a package that greatly expands Mathematica's capability to produce good looking plots. I use it if not for anything else then for the much, much improved control over frame/axes ticks. David Park's Presentation Package ($50 - no charge for updates) Tools: MASH is Daniel Reeves's excellent perl script essentially providing scripting support for Mathematica 7. (This is finally built in as of Mathematica 8 with the -script option.) Resources: Wolfram's own repository MathSource has a lot of useful if narrow notebooks for various applications. Also check out the other sections such as Current Documentation, Courseware for lectures, and Demos for, well, demos. Books: Mathematica programming: an advanced introduction by Leonid Shifrin (web, pdf) is a must read if you want to do anything more than For loops in Mathematica. Quantum Methods with Mathematica by James F. Feagin (amazon) The Mathematica Book by Stephen Wolfram (amazon) (web) Schaum's Outline (amazon) Mathematica in Action by Stan Wagon (amazon) - 600 pages of neat examples and goes up to Mathematica version 7. Visualization techniques are especially good, you can see some of them on the author's Demonstrations Page. Mathematica Programming Fundamentals by Richard Gaylord (pdf) - A good concise introduction to most of what you need to know about Mathematica programming. Undocumented (or scarcely documented) Features: How to customize Mathematica keyboard shortcuts. See this question. How to inspect patterns and functions used by Mathematica's own functions. See this answer How to achieve Consistent size for GraphPlots in Mathematica? See this question.

    Read the article

  • Using 32bit COM object from C# or VBS on Vista 64bit and getting error 80004005

    - by alexandroid
    I need some mind reading here, since I am trying to do what I do not completely understand. There is a 32-bit application (electronic trading application called CQG) which provides a COM API for external access. I have sample programs and scripts which access this API from Excel, .NET (C++, VB and C#) and shell VBScript. I have these .NET applications as source code and as compiled executables (32-bit, compiled on Windows XP). Now I have Windows Vista Home 64-bit which makes my head to spin. Excel examples work just fine (in Excel 2003). Compiled .NET sample executables work as well. But when I am trying to run .NET C# sample converted to and compiled by Visual Studio C# Expression, or run the VBScript script, I am getting error 80004005 when trying to create an object. Initially the .NET application also gave me 80040154 but then I figured how to make it produce 32-bit code and not 64-bit, so now the errors in C# and VBScript applications are the same. That's all the progress I got for now. And yes, I tried running 32-bit versions of cscript.exe/WScript from SysWOW64 folder on my VBS, but still the result is the same (80004005). How to solve this problem? I am almost ready to believe it is practically impossible, but the fact that Excel VBA works and .NET executables compiled on Windows XP run just fine just makes me angry. There should be a way to beat this thing (some secret which probably only Windows Vista developers know)! I will appreciate any help! PS: I believe code samples do not make much sense here, but this is the line of VBScript which fails: Set CEL = WScript.CreateObject("CQG.CQGCEL.4.0", "CEL_") And this is C#: CQGCEL CEL = new CQGCEL(); Update: Forgot to say UAC is off, of course. And I am working from account with administrator priviledges. I also tried watching which registry keys are read using Process Monitor but everything looks OK for GUIDs of this object. I could not recognize some other GUIDs so I am not sure whether they were critical or not. Is there a chance that this COM object uses Internet Explorer and gets the wrong one (like Internet Explorer 7 instead of Internet Explorer 6 engine or something)?

    Read the article

  • How do I backup and restore the system clipboard in C#?

    - by gtaborga
    Hey everyone, I will do my best to explain in detail what I'm trying to achieve. I'm using C# with IntPtr window handles to perform a CTRL-C copy operation on an external application from my own C# application. I had to do this because there was no way of accessing the text directly using GET_TEXT. I'm then using the text content of that copy within my application. The problem here is that I have now overwritten the clipboard. What I would like to be able to do is: Backup the original contents of the clipboard which could have been set by any application other than my own. Then perform the copy and store the value into my application. Then restore the original contents of the clipboard so that the user still has access to his/her original clipboard data. This is the code I have tried so far: private void GetClipboardText() { text = ""; IDataObject backupClipboad = Clipboard.GetDataObject(); KeyboardInput input = new KeyboardInput(this); input.Copy(dialogHandle); // Performs a CTRL-C (copy) operation IDataObject clipboard = Clipboard.GetDataObject(); if (clipboard.GetDataPresent(DataFormats.Text)) { // Retrieves the text from the clipboard text = clipboard.GetData(DataFormats.Text) as string; } if (backupClipboad != null) { Clipboard.SetDataObject(backupClipboad, true); // throws exception } } I am using the System.Windows.Clipboard and not the System.Windows.Forms.Clipboard. The reason for this was that when I performed the CTRL-C, the Clipboard class from System.Windows.Forms did not return any data, but the system clipboard did. I looked into some of the low level user32 calls like OpenClipboard, EmptyClipboard, and CloseClipboard hoping that they would help my do this but so far I keep getting COM exceptions when trying to restore. I thought perhaps this had to do with the OpenClipboard parameter which is expecting an IntPtr window handle of the application which wants to take control of the clipboard. Since I mentioned that my application does not have a GUI this is a challenge. I wasn't sure what to pass here. Maybe someone can shed some light on that? Am I using the Clipboard class incorrectly? Is there a clear way to obtain the IntPtr window handle of an application with no GUI? Does anyone know of a better way to backup and restore the system clipboard?
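    A rough sketch of a text-only fallback (an assumption, not a verified fix for the COM exception) would be to snapshot just the clipboard text rather than holding the live IDataObject across the copy:

        // Hypothetical sketch: preserves only text-format clipboard data
        string backup = Clipboard.ContainsText() ? Clipboard.GetText() : null;

        KeyboardInput input = new KeyboardInput(this);
        input.Copy(dialogHandle);                     // CTRL-C into the external window

        text = Clipboard.ContainsText() ? Clipboard.GetText() : "";

        if (backup != null)
            Clipboard.SetText(backup);                // put the user's text back
        else
            Clipboard.Clear();                        // nothing textual to restore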

    Read the article

  • Flash AS3 blur or liquify part of an image with mouse

    - by hamlet
    Hi, I am very beginner in flash. I want to load an image, show a cursor over the image and on mousedown I want to blur that actual part of the image. (e.g you can blur your face on the image and then save the new image). I can delete parts of the image with white line, but I would like to blur it instead // LIVE JPEG ENCODER 0.3 // from bytearray.org import asfiles.encoding.JPEGEncoder; import flash.external.ExternalInterface; ExternalInterface.addCallback("flash_saveImage", inflash_saveImage); var loader:Loader = new Loader(); loader.contentLoaderInfo.addEventListener(Event.COMPLETE, handleComplete); loader.load(new URLRequest(loaderInfo.parameters._filename)); //loader.load(new URLRequest("b.jpg")); var container_mc:MovieClip = new MovieClip;//create movieclip function handleComplete(e:Event):void { addChild(container_mc); var bitmapData:BitmapData = Bitmap(e.target.content).bitmapData; var matrix:Matrix = new Matrix(); container_mc.graphics.clear(); container_mc.graphics.beginBitmapFill(bitmapData, matrix, false); //container_mc.graphics.beginFill(0xFFFFFF,0); container_mc.graphics.drawRect(0, 0, bitmapData.width, bitmapData.height); container_mc.graphics.endFill(); swapChildren(container_mc, pencil); container_mc.addEventListener(MouseEvent.MOUSE_DOWN, startDrawing); container_mc.addEventListener(MouseEvent.MOUSE_UP, stopDrawing); container_mc.addEventListener(MouseEvent.MOUSE_MOVE, makeLine); } stage.addEventListener(MouseEvent.MOUSE_MOVE, moveCursor); Mouse.hide(); function moveCursor(event:MouseEvent):void { pencil.x = event.stageX; pencil.y = event.stageY; } function startDrawing(event:MouseEvent):void{ container_mc.graphics.lineStyle(20, 0xFFFFFF, 1); container_mc.graphics.moveTo(mouseX, mouseY); container_mc.addEventListener(MouseEvent.MOUSE_MOVE, makeLine); } function stopDrawing(event:MouseEvent):void{ container_mc.removeEventListener(MouseEvent.MOUSE_MOVE, makeLine); } function makeLine(event:MouseEvent):void{ container_mc.graphics.lineTo(mouseX, mouseY); } function inflash_saveImage ( ):void { var myURLLoader:URLLoader = new URLLoader(); var myBitmapSource:BitmapData = new BitmapData ( container_mc.width, container_mc.height ); // render the player as a bitmapdata myBitmapSource.draw ( container_mc ); // create the encoder with the appropriate quality var myEncoder:JPEGEncoder = new JPEGEncoder( 80 ); // generate a JPG binary stream to have a preview var myCapStream:ByteArray = myEncoder.encode ( myBitmapSource ); var header:URLRequestHeader = new URLRequestHeader ("Content-type", "application/octet-stream"); var myRequest:URLRequest = new URLRequest ( "save.php" ); myRequest.requestHeaders.push (header); myRequest.method = URLRequestMethod.POST; myRequest.data = myCapStream; myURLLoader.load ( myRequest ); } Thanks, hamlet
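    One way to blur instead of painting white (sketched here as an assumption; bitmapData would need to be kept in scope rather than local to handleComplete) is to apply a BlurFilter to a small rectangle of the BitmapData around the mouse:

        // Hypothetical sketch for AS3
        import flash.filters.BlurFilter;
        import flash.geom.Rectangle;
        import flash.geom.Point;

        var blur:BlurFilter = new BlurFilter(8, 8, 2);

        function blurAt(px:Number, py:Number):void {
            var region:Rectangle = new Rectangle(px - 10, py - 10, 20, 20);
            bitmapData.applyFilter(bitmapData, region, new Point(region.x, region.y), blur);
            // re-run the beginBitmapFill/drawRect step (or show the BitmapData
            // through a Bitmap instance) so the blurred pixels become visible
        }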

    Read the article

  • Error when compiling code with the Protege API

    - by Anto
    I am new to Protege API and I have just created on Eclipse a small application which uses an external OWL file. Also I did import all the necessary libraries. import java.util.Collection; import java.util.Iterator; import edu.stanford.smi.protege.exception.OntologyLoadException; import edu.stanford.smi.protegex.owl.ProtegeOWL; import edu.stanford.smi.protegex.owl.model.*; public class Trial { public static void main(String[] args) throws OntologyLoadException{ String uri = "C:/Documents and Settings/Anto/Desktop/travel.owl"; OWLModel owlModel = ProtegeOWL.createJenaOWLModelFromURI(uri); Collection classes = owlModel.getUserDefinedOWLNamedClasses(); for(Iterator it = classes.iterator(); it.hasNext();){ OWLNamedClass cls = (OWLNamedClass) it.next(); Collection instances = cls.getInstances(false); System.out.println("Class " + cls.getBrowserText()+ " (" + instances.size()+")"); for(Iterator jt = instances.iterator(); jt.hasNext();){ OWLIndividual individual = (OWLIndividual) jt.next(); System.out.println(" - "+ individual.getBrowserText()); } } } } When I do compile however I get the following errors: WARNING: [Local Folder Repository] The specified file must be a directory. (C:\Documents and Settings\Anto\My Documents\Eclipse Workspace\ProtegeTrial\plugins\edu.stanford.smi.protegex.owl) LocalFolderRepository.update() SEVERE: Exception caught -- java.net.URISyntaxException: Illegal character in path at index 12: C:/Documents and Settings/CiuffreA/Desktop/travel.owl at java.net.URI$Parser.fail(URI.java:2809) at java.net.URI$Parser.checkChars(URI.java:2982) at java.net.URI$Parser.parseHierarchical(URI.java:3066) at java.net.URI$Parser.parse(URI.java:3014) at java.net.URI.<init>(URI.java:578) at edu.stanford.smi.protegex.owl.jena.JenaKnowledgeBaseFactory.getFileURI(Unknown Source) at edu.stanford.smi.protegex.owl.jena.JenaKnowledgeBaseFactory.loadKnowledgeBase(Unknown Source) at edu.stanford.smi.protege.model.Project.loadDomainKB(Unknown Source) at edu.stanford.smi.protege.model.Project.createDomainKnowledgeBase(Unknown Source) at edu.stanford.smi.protegex.owl.jena.creator.OwlProjectFromUriCreator.create(Unknown Source) at edu.stanford.smi.protegex.owl.ProtegeOWL.createJenaOWLModelFromURI(Unknown Source) at Trial.main(Trial.java:14) Exception in thread "main" java.lang.NullPointerException at edu.stanford.smi.protegex.owl.jena.JenaKnowledgeBaseFactory.loadKnowledgeBase(Unknown Source) at edu.stanford.smi.protege.model.Project.loadDomainKB(Unknown Source) at edu.stanford.smi.protege.model.Project.createDomainKnowledgeBase(Unknown Source) at edu.stanford.smi.protegex.owl.jena.creator.OwlProjectFromUriCreator.create(Unknown Source) at edu.stanford.smi.protegex.owl.ProtegeOWL.createJenaOWLModelFromURI(Unknown Source) at Trial.main(Trial.java:14) Does anyone have an idea on where the problem should be?
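    A note on the trace: "Illegal character in path at index 12" points at the space in "Documents and Settings"; a raw Windows path is not a legal URI. One possible fix (a sketch, not verified against this Protege version) is to let java.io.File build a properly escaped file: URI before handing it to ProtegeOWL:

    import java.io.File;
    import java.net.URI;

    import edu.stanford.smi.protegex.owl.ProtegeOWL;
    import edu.stanford.smi.protegex.owl.model.OWLModel;

    // ...
    File owlFile = new File("C:/Documents and Settings/Anto/Desktop/travel.owl");
    URI owlUri = owlFile.toURI(); // file:/C:/Documents%20and%20Settings/... (spaces escaped)
    OWLModel owlModel = ProtegeOWL.createJenaOWLModelFromURI(owlUri.toString());

    The earlier WARNING about the plugins folder appears to be a separate message from the URI failure itself.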

    Read the article

  • How to run a powershell script within a DOS batch file

    - by Don Vince
    How do I have a PowerShell script embedded within the same file as a DOS batch script? I know this kind of thing is possible in other scenarios: embedding SQL in a DOS batch script using sqlcmd and a clever arrangement of gotos and comments at the beginning of the file; or, in a *nix environment, having the name of the program you wish to run the script with on the first line of the script, commented out, e.g. #!/usr/local/bin/python. There may not be a way to do this, in which case I will have to call the separate PowerShell script from the launching DOS script. One possible solution I've considered is to echo out the PowerShell script and then run it. A good reason not to do this is that part of the point of the exercise is to use the advantages of the PowerShell environment without the pain of, for example, DOS escape characters. I have some unusual constraints and would like to find an elegant solution. I suspect this question may be baiting responses of the variety: "why don't you try and solve this different problem instead." Suffice to say these are my constraints, sorry about that. Any ideas? Is there a suitable combination of clever comments and escape characters that will enable me to achieve this? Some thoughts on how to achieve it: a caret ^ at the end of a line in DOS is a continuation, like an underscore in VB; an ampersand & in DOS is typically used to separate commands, so echo Hello & echo World results in two echoes on separate lines; and %0 will give you the script that's currently running. So something like this (if I could make it work) would be good: # & call powershell -psconsolefile %0 # & goto :EOF /* From here on in we're running nice juicy powershell code */ Write-Output "Hello World" Except... it doesn't work, because the extension of the file isn't to PowerShell's liking: Windows PowerShell console file "insideout.bat" extension is not psc1. Windows PowerShell console file extension must be psc1. DOS isn't altogether happy with the situation either, although it only stumbles on: '#' is not recognized as an internal or external command, operable program or batch file.
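    One pattern that may fit (a condensed sketch of the well-known @@-prefix trick; argument passing and error handling are omitted, and the flags assume powershell.exe 2.0 or later): prefix every batch line with @@, have that prolog re-invoke the same file through Invoke-Expression with the @@ lines filtered out, and put plain PowerShell underneath.

    @@:: insideout.bat - batch prolog; every line cmd should run starts with @@
    @@powershell -NoProfile -ExecutionPolicy Bypass -Command "Invoke-Expression (((Get-Content '%~f0') -notmatch '^@@') -join [Environment]::NewLine)"
    @@goto :EOF

    # From here on in we're running nice juicy powershell code
    Write-Output 'Hello World'
    Get-Date

    The caveat is that the body runs through Invoke-Expression rather than as a script file, so $MyInvocation details and %* argument forwarding need extra plumbing, which is why fuller versions of this trick are longer.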

    Read the article

  • jqmodal IE (7 or 8) flashes black before modal loaded

    - by brad
    This is killing me. In both IE7 and 8, using jqModal, the screen flashes black before the modal content is loaded. I've set up a test app to show you what's happening. I've taken jqModal EXACTLY from the site, no changes whatsoever, and there is no external css that could be affecting my app. It works perfectly in every other browser (including IE6). http://jqmtest.heroku.com/ So, the first two links are ajax calls, and the third is straight-up inline HTML. (I originally thought it was the ajax that was affecting it, but that doesn't seem to be the case; I then thought it was slow-loading ajax, hence the two different ajax links.) What's crazy is that the jqModal site itself works perfectly in IE, with no flash of black, but I can't see what I'm doing wrong. The code is straightforward HTML: <body> <div id="ajaxModal" class="jqmWindow"></div> <div id="inlineModal" class="jqmWindow"> <div style="height:300px;position:relative;"> <p>Here's some inline content</p> <a href="#" onclick='$("#inlineModal").jqmHide();return false;' style="position:absolute;bottom:10px;right:10px">Close</a> </div> </div> <div style="width:600px;height:400px;margin:auto;background:#eee;"> <p><a href="/ajax/short" class="jqModal">Short loading modal</a></p> <br /> <p><a href="/ajax/long" class="jqModal">Longer loading modal</a></p> <br /> <p><a href="#" class="jqInline">inline modal</a></p> </div> </body> Javascript: <script type="text/javascript"> $(function(){ $("#ajaxModal").jqm({ajax:'@href', modal:true}); $("#inlineModal").jqm({modal:true, trigger:'.jqInline'}); }); </script> The CSS is exactly the same as the one downloaded from jqModal's site, so I'll omit it, but you can see it on my app. Has anyone experienced this? I don't get how theirs works and mine doesn't.
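    One thing that may be worth trying (purely a guess, not a confirmed fix, and assuming the default jqmOverlay overlay class) is pre-declaring the overlay opacity for IE in the stylesheet, so the black overlay element never paints at full opacity in the moment before jqModal applies its inline opacity:

    .jqmOverlay {
        background-color: #000;
        opacity: 0.5;
    }
    /* IE-specific equivalents, in place before the element is first shown */
    .jqmOverlay {
        -ms-filter: "progid:DXImageTransform.Microsoft.Alpha(Opacity=50)"; /* IE8 */
        filter: alpha(opacity=50);                                         /* IE7 */
    }

    If that has no effect, comparing the computed styles of the overlay on jqmodal.com against this page in the IE developer tools should at least narrow down which rule differs.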

    Read the article

  • Why is jQuery .load() firing twice?

    - by LeslieOA
    Hello S-O. I'm using jQuery 1.4 with jQuery History and trying to figure out why Firebug/Web Inspector are showing 2 XHR GET requests on each page load (double that amount when visiting my site's homepage, / or /#). e.g. Visit this (or any) page with Firebug enabled. Here's the edited/relevant code (see full source): $(document).ready(function() { $('body').delegate('a', 'click', function(e) { var hash = this.href; if (hash.indexOf(window.location.hostname) > 0) { /* Internal */ hash = hash.substr((window.location.protocol+'//'+window.location.host+'/').length); $.historyLoad(hash); return false; } else if (hash.indexOf(window.location.hostname) == -1) { /* External */ window.open(hash); return false; } else { /* Nothing to do */ } }); $.historyInit(function(hash) { $('#loading').remove(); $('#container').append('<span id="loading">Loading...</span>'); $('#ajax').animate({height: 'hide'}, 'fast', 'swing', function() { $('#page').empty(); $('#loading').fadeIn('fast'); if (hash == '') { /* Index */ $('#ajax').load('/ #ajax','', function() { ajaxLoad(); }); } else { $('#ajax').load(hash + ' #ajax', '', function(responseText, textStatus, XMLHttpRequest) { switch (XMLHttpRequest.status) { case 200: ajaxLoad(); break; case 404: $('#ajax').load('/404 #ajax','', ajaxLoad); break; // Default 404 default: alert('We\'re experiencing technical difficulties. Try refreshing.'); break; } }); } }); // $('#ajax') }); // historyInit() function ajaxLoad() { $('#loading').fadeOut('fast', function() { $(this).remove(); $('#ajax').animate({height: 'show', opacity: '1'}, 'fast', 'swing'); }); } }); A few notes that may be helpful: I'm using WordPress with the default/standard .htaccess; I'm redirecting /links-like/this to /#links-like/this via JavaScript only (PE); and I'm achieving the above with window.location.replace(addr); and not window.location=addr;. Feel free to visit my site if needed. Thanks in advance.
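    A small diagnostic sketch (none of this is from the page above; the lastHash guard is an assumption about where the duplication creeps in): log every outgoing request globally so Firebug shows which code path fires the second GET, and skip re-loading an unchanged hash.

    var lastHash = null;

    // Global logger: fires for every jQuery ajax request, including .load().
    $(document).ajaxSend(function (event, xhr, settings) {
        if (window.console && console.log) {
            console.log('XHR ->', settings.url); // which URL is being requested
            console.trace();                     // and which caller triggered it
        }
    });

    // Two lines to drop into the top of the existing $.historyInit callback,
    // so a duplicate callback for the same hash is ignored:
    //     if (hash === lastHash) { return; }
    //     lastHash = hash;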

    Read the article

  • Using sem_t in a Qt Project

    - by thauburger
    Hi everyone, I'm working on a simulation in Qt (C++), and would like to make use of a Semaphore wrapper class I made for the sem_t type. Although I am including semaphore.h in my wrapper class, running qmake provides the following error: 'sem_t does not name a type' I believe this is a library/linking error, since I can compile the class without problems from the command line. I've read that you can specify external libraries to include during compilation. However, I'm a) not sure how to do this in the project file, and b) not sure which library to include in order to access semaphore.h. Any help would be greatly appreciated. Thanks, Tom Here's the wrapper class for reference: Semaphore.h #ifndef SEMAPHORE_H #define SEMAPHORE_H #include <semaphore.h> class Semaphore { public: Semaphore(int initialValue = 1); int getValue(); void wait(); void post(); private: sem_t mSemaphore; }; #endif Semaphore.cpp #include "Semaphore.h" Semaphore::Semaphore(int initialValue) { sem_init(&mSemaphore, 0, initialValue); } int Semaphore::getValue() { int value; sem_getvalue(&mSemaphore, &value); return value; } void Semaphore::wait() { sem_wait(&mSemaphore); } void Semaphore::post() { sem_post(&mSemaphore); } And, the QT Project File: TARGET = RestaurantSimulation TEMPLATE = app QT += SOURCES += main.cpp \ RestaurantGUI.cpp \ RestaurantSetup.cpp \ WidgetManager.cpp \ RestaurantView.cpp \ Table.cpp \ GUIFood.cpp \ GUIItem.cpp \ GUICustomer.cpp \ GUIWaiter.cpp \ Semaphore.cpp HEADERS += RestaurantGUI.h \ RestaurantSetup.h \ WidgetManager.h \ RestaurantView.h \ Table.h \ GUIFood.h \ GUIItem.h \ GUICustomer.h \ GUIWaiter.h \ Semaphore.h FORMS += RestaurantSetup.ui LIBS += Full Compiler Output: g++ -c -pipe -g -gdwarf-2 -arch i386 -Wall -W -DQT_GUI_LIB -DQT_CORE_LIB -DQT_SHARED - I/usr/local/Qt4.6/mkspecs/macx-g++ -I. - I/Library/Frameworks/QtCore.framework/Versions/4/Headers -I/usr/include/QtCore - I/Library/Frameworks/QtGui.framework/Versions/4/Headers -I/usr/include/QtGui - I/usr/include -I. -I. -F/Library/Frameworks -o main.o main.cpp In file included from RestaurantGUI.h:10, from main.cpp:2: Semaphore.h:14: error: 'sem_t' does not name a type make: *** [main.o] Error 1 make: Leaving directory `/Users/thauburger/Desktop/RestaurantSimulation' Exited with code 2. Error while building project RestaurantSimulation When executing build step 'Make'
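    One likely explanation (an assumption based on the -I. flags in the compile line and the case-insensitive default filesystem on Mac OS X): #include <semaphore.h> inside a file named Semaphore.h resolves back to the wrapper header itself, so the POSIX sem_t is never declared. A sketch of a renamed header that sidesteps the collision:

    // PosixSemaphore.h (renamed from Semaphore.h; update the .pro HEADERS/SOURCES
    // entries and the #include lines in RestaurantGUI.h and the .cpp file to match)
    #ifndef POSIX_SEMAPHORE_H
    #define POSIX_SEMAPHORE_H

    #include <semaphore.h>   // now finds the system header, not this file

    class Semaphore
    {
    public:
        explicit Semaphore(int initialValue = 1);
        int getValue();
        void wait();
        void post();

    private:
        sem_t mSemaphore;
    };

    #endif // POSIX_SEMAPHORE_H

    A separate caveat worth checking: sem_init and sem_getvalue for unnamed semaphores are reported as unimplemented on Mac OS X, so sem_open (named semaphores) or QSemaphore may end up being needed anyway.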

    Read the article

  • How do you model roles / relationships with Domain Driven Design in mind?

    - by kitsune
    If I have three entities, Project, ProjectRole and Person, where a Person can be a member of different Projects and be in different Project Roles (such as "Project Lead" or "Project Member"), how would you model such a relationship? In the database, I currently have the following tables: Project, Person, ProjectRole, and Project_Person with PersonId & ProjectId as PK and a ProjectRoleId as a FK relationship. I'm really at a loss here since all the domain models I come up with seem to break some "DDD" rule. Are there any 'standards' for this problem? I had a look at Streamlined Object Modeling and there is an example of what a Project and ProjectMember would look like, but AddProjectMember() in Project would call ProjectMember.AddProject(). So Project has a List of ProjectMembers, and each ProjectMember in return has a reference to the Project. Looks a bit convoluted to me. update After reading more about this subject, I will try the following: There are distinct roles, or better, model relationships, that are of a certain role type within my domain. For instance, ProjectMember is a distinct role that tells us something about the relationship a Person plays within a Project. It contains a ProjectMembershipType that tells us more about the Role it will play. I do know for certain that persons will have to play roles inside a project, so I will model that relationship. ProjectMembershipTypes can be created and modified. These can be "Project Leader", "Developer", "External Adviser", or something different. A person can have many roles inside a project, and these roles can start and end at a certain date. Such relationships are modeled by the class ProjectMember. public class ProjectMember : IRole { public virtual int ProjectMemberId { get; set; } public virtual ProjectMembershipType ProjectMembershipType { get; set; } public virtual Person Person { get; set; } public virtual Project Project { get; set; } public virtual DateTime From { get; set; } public virtual DateTime Thru { get; set; } // etc... } ProjectMembershipType, i.e. "Project Manager", "Developer", "Adviser": public class ProjectMembershipType : IRoleType { public virtual int ProjectMembershipTypeId { get; set; } public virtual string Name { get; set; } public virtual string Description { get; set; } // etc... }
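    As an illustration of the aggregate-owned variant (the method names and collection type are assumptions layered on the classes above): let Project create and hold the ProjectMember, so callers never have to wire both sides of the relationship by hand.

    using System;
    using System.Collections.Generic;

    public class Project
    {
        private readonly List<ProjectMember> members = new List<ProjectMember>();

        public virtual IEnumerable<ProjectMember> Members
        {
            get { return members.AsReadOnly(); }
        }

        // The only way to attach a person: the aggregate builds the relationship object.
        public virtual ProjectMember AddMember(Person person, ProjectMembershipType role, DateTime from)
        {
            var membership = new ProjectMember
            {
                Project = this,
                Person = person,
                ProjectMembershipType = role,
                From = from
            };
            members.Add(membership);
            return membership;
        }
    }

    Whether Person also needs a navigable list of its memberships can then be decided per use case rather than by default.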

    Read the article

  • XML Schema to Java Classes with XJC

    - by nevets1219
    I am using xjc to generate Java classes from the XML schema and the following is an excerpt of the XSD. <xs:element name="NameInfo"> <xs:complexType> <xs:sequence> <xs:choice> <xs:element ref="UnstructuredName"/> <!-- This line --> <xs:sequence> <xs:element ref="StructuredName"/> <xs:element ref="UnstructuredName" minOccurs="0"/> <!-- and this line! --> </xs:sequence> </xs:choice> <xs:element ref="SomethingElse" minOccurs="0"/> </xs:sequence> </xs:complexType> </xs:element> For the most part the generated classes are fine but for the above block I would get something like: public List<Object> getContent() { if (content == null) { content = new ArrayList<Object>(); } return this.content; } with the following comment above it: * You are getting this "catch-all" property because of the following reason: * The field name "UnstructuredName" is used by two different parts of a schema. See: * line XXXX of file:FILE.xsd * line XXXX of file:FILE.xsd * To get rid of this property, apply a property customization to one * of both of the following declarations to change their names: * Gets the value of the content property. I have placed a comment at the end of the two line in question. At the moment, I don't think it will be easy to change the schema since this was decided between vendors and I would not want to go this route (if possible) as it will slow down progress quite a bit. I searched and have found this page, is the external customization what I want to do? I have been mostly working with the generated classes so I'm not entirely familiar with the process that generates these classes. A simple example of the "property customization" would be great! Alternative method of generating the Java classes would be fine as long as the schema can still be used.
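    An external binding customization is indeed the usual route for this. A sketch of a bindings file (the schemaLocation and the XPath are assumptions and have to be adjusted to the real schema file and to whichever of the two UnstructuredName references should be renamed):

    <!-- bindings.xjb -->
    <jaxb:bindings xmlns:jaxb="http://java.sun.com/xml/ns/jaxb"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   version="2.1">
      <jaxb:bindings schemaLocation="FILE.xsd"
                     node="//xs:element[@name='NameInfo']//xs:sequence/xs:element[@ref='UnstructuredName']">
        <!-- property customization: rename the clashing property -->
        <jaxb:property name="additionalUnstructuredName"/>
      </jaxb:bindings>
    </jaxb:bindings>

    Compiling with something like xjc -b bindings.xjb FILE.xsd should then map the two references to differently named properties, and the List<Object> catch-all gives way to the usual typed getters.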

    Read the article

  • Proxy settings with ivy...

    - by user315228
    Hi, I have an issue wherein I have defined dependencies in ivy.xml on our internal corporate svn. I am able to access this svn site without any proxy task in Ant, while my dependencies reside on ibiblio, which is outside our corporate network and needs a proxy in order to download anything. I am facing a problem using Ivy here. I have the following in build.xml: <target name="proxy"> <property name="proxy.host" value="xyz.proxy.net"/> <property name="proxy.port" value="8443"/> <setproxy proxyhost="${proxy.host}" proxyport="${proxy.port}"/> </target> <!-- resolve the dependencies of stratus --> <target name="resolveTestDependency" depends="testResolve, proxy" description="retrieve test dependencies with ivy"> <ivy:settings file="stratus-ivysettings.xml" /> <ivy:retrieve conf="test" pattern="${jars}/[artifact]-[revision].[ext]"/> <!-- the pattern here specifies where you want the libs downloaded to --> </target> <target name="testResolve"> <ivy:settings file="stratus-ivysettings.xml" /> <ivy:resolve conf="test" file="stratus-ivy.xml"/> </target> The following is the excerpt from stratus-ivysettings.xml: <resolvers> <!-- here you define your file on a private machine, not on the repo (e.g. jPricer.jar or edgApi.jar) --> <url name="privateFS"> <ivy pattern="http://xyz.svn.com/ivyRepository/[organisation]/ivy/ivy.xml"/> </url> . . . <url name="public" m2compatible="true"> <artifact pattern="http://www.ibiblio.org/maven2/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]"/> </url> . . . So, as can be seen here, I don't need any proxy for getting ivy.xml, as it sits within our own network, which cannot be reached once I set the proxy. On the other hand, I am also using ibiblio, which is external to our network and works only through the proxy, so the above build.xml won't work in that case. Can somebody help here? I don't need the proxy while getting ivy.xml (if the proxy is set, Ivy won't be able to find the ivy file from within the network), and I only need it when my resolver goes to the public URL.
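    One option that may cover this (the host patterns below are placeholders): Ant's setproxy task takes a nonproxyhosts list, so the proxy can stay configured for ibiblio while the internal svn host that serves the ivy.xml files bypasses it.

    <target name="proxy">
        <property name="proxy.host" value="xyz.proxy.net"/>
        <property name="proxy.port" value="8443"/>
        <!-- hosts matching these patterns are fetched directly; everything else goes via the proxy -->
        <setproxy proxyhost="${proxy.host}"
                  proxyport="${proxy.port}"
                  nonproxyhosts="xyz.svn.com|*.svn.com"/>
    </target>

    With that in place, resolveTestDependency can keep depending on both testResolve and proxy, since the proxy settings no longer block the privateFS resolver.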

    Read the article

< Previous Page | 297 298 299 300 301 302 303 304 305 306 307 308  | Next Page >