Search Results

Search found 571 results on 23 pages for 'intention'.

Page 19/23

  • Aborting $.post() / responsive search results

    - by Emphram Stavanger
    I have the following kludgey code;

    HTML:

        <input type="search" id="search_box" />
        <div id="search_results"></div>

    JS:

        var search_timeout, search_xhr;
        $("#search_box").bind("textchange", function(){
            clearTimeout(search_timeout);
            search_xhr.abort();
            search_term = $(this).val();
            search_results = $("#search_results");
            if(search_term == "") {
                if(search_results.is(":visible")) search_results.stop().hide("blind", 200);
            } else {
                if(search_results.is(":hidden")) search_results.stop().show("blind", 200);
            }
            search_timeout = setTimeout(function () {
                search_xhr = $.post("search.php", { q: search_term }, function(data){
                    search_results.html(data);
                });
            }, 100);
        });

    (This uses the textchange plugin by Zurb.) The problem I had with my original, simpler code was that it was horribly unresponsive. Results would appear seconds later, especially when I typed slowly or used Backspace. I made all this, and the situation isn't much better: requests pile up. My original intention is to use .abort() to cancel out whatever previous request is still running as the textchange event is fired again (as per 446594). This doesn't work, as I get repeated errors like this in the console:

        Uncaught TypeError: Cannot call method 'abort' of undefined

    How can I make .abort() work in my case? Furthermore, is this approach the best way to fetch 'realtime' search results? Much like Facebook's search bar, which gives results as the user types and seems to be very quick on its feet.
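
    One likely culprit: search_xhr is undefined until the first request has actually been issued, so the very first run of the handler dies on .abort(). Below is a minimal sketch of a guard, assuming jQuery 1.5 or later (where $.post returns a jqXHR object that exposes abort()); the 250 ms debounce value is an arbitrary choice, not taken from the original question.

        var search_timeout, search_xhr = null;
        $("#search_box").bind("textchange", function () {
            clearTimeout(search_timeout);
            // Only abort when a previous request exists and has not finished yet
            if (search_xhr && search_xhr.readyState !== 4) {
                search_xhr.abort();
            }
            var search_term = $(this).val();
            var search_results = $("#search_results");
            search_timeout = setTimeout(function () {
                search_xhr = $.post("search.php", { q: search_term }, function (data) {
                    search_results.html(data);
                });
            }, 250); // debounce so slow typing doesn't fire a request per keystroke
        });

    Aborting the previous jqXHR plus a slightly longer debounce is one common pattern for 'as you type' search boxes; this is a sketch of that pattern, not the only possible answer to the question above.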

    Read the article

  • Talking JavaOne with Rock Star Simon Ritter

    - by Janice J. Heiss
    Oracle’s Java Technology Evangelist Simon Ritter is well known at JavaOne for his quirky and fun-loving sessions, which this year include:
        CON4644 -- “JavaFX Extreme GUI Makeover” (with Angela Caicedo, on how to improve UIs in JavaFX)
        CON5352 -- “Building JavaFX Interfaces for the Real World” (Kinect gesture tracking and mind reading)
        CON5348 -- “Do You Like Coffee with Your Dessert?” (Some cool demos of Java on the Raspberry Pi)
        CON6375 -- “Custom JavaFX Charts” (How to extend JavaFX Chart controls with some interesting things)
    I recently asked Ritter about the significance of the Raspberry Pi, the topic of one of his sessions: a credit card-sized single-board computer developed in the UK with the intention of stimulating the teaching of basic computer science in schools. “I don't think there's one definitive thing that makes the RP significant,” observed Ritter, “but a combination of things that really makes it stand out. First, it's the cost: $35 for what is effectively a completely usable computer. OK, so you have to add a power supply, SD card for storage and maybe a screen, keyboard and mouse, but this is still way cheaper than a typical PC. The choice of an ARM processor is also significant, as it avoids problems like cooling (no heat sink or fan) and can use a USB power brick. Combine these two things with the immense groundswell of community support and it provides a fantastic platform for teaching young and old alike about computing, which is the real goal of the project.” He informed me that he’ll be at the Raspberry Pi meetup on Saturday (not part of JavaOne). Check out the details here.
    JavaFX Interfaces
    When I asked about how JavaFX can interface with the real world, he said that there are many ways. “JavaFX provides you with a simple set of programming interfaces that can create complex, cool and compelling user interfaces,” explained Ritter. “Because it's just Java code you can combine JavaFX with any other Java library to provide data to display and control the interface. What I've done for my session is look at some of the possible ways of doing this using some of the amazing hardware that's available today at very low cost. The Kinect sensor has added a new dimension to gaming in terms of interaction; there's a Java API to access this so you can easily collect skeleton tracking data from it. Some clever people have also written libraries that can track gestures like swipes, circles, pushes, and so on. We use these to control parts of the UI. I've also experimented with a Neurosky EEG sensor that can in some ways ‘read your mind’ (well, at least measure some of the brain functions like attention and meditation). I've written a Java library for this that I include as a way of controlling the UI. We're not quite at the stage of just thinking a command though!”
    Here Comes Java Embedded
    And what, from Ritter’s perspective, is the most exciting thing happening in the world of Java today? “I think it's seeing just how Java continues to become more and more pervasive,” he said. “One of the areas that is growing rapidly is embedded systems. We've talked about the ‘Internet of things’ for many years; now it's finally becoming a reality. With the ability of more and more devices to include processing, storage and networking we need an easy way to write code for them that's reliable, has high performance, and is secure. Java fits all these requirements. With Java Embedded being a conference within a conference, I'm very excited about the possibilities of Java in this space.” Check out Ritter’s sessions or say hi if you run into him. Originally published on blogs.oracle.com/javaone.

    Read the article

  • TiVo Follow-up…Training Opportunities

    - by MightyZot
    A few posts ago I talked about my experience with TiVo Customer Service. While I didn’t receive bad service per se, I felt like the reps could have communicated better. I made the argument that it should be just as easy to leave a company as it is to engage with a company, even though my intention is to remain a TiVo fan. I worked for DataStorm Technologies in the early 90s. I pointed out to another developer that we were leaving files behind in our installations. My opinion was that, if the customer is uninstalling our application, there should be no trace of it left after uninstall except for the customer’s data. He replied with, “screw ‘em. They’re leaving us. Why do we care if we left anything behind?” Wow. Surely there is a lot of arrogance in that statement. Think about this…how often do you change your services, devices, or whatever?  Personally, I change things up about once every two or three years. If I don’t change things up, I at least think about it. So, every two or three years there is an opportunity for you (as a vendor or business) to sell me something. (That opportunity actually exists all the time, because there are many of these two or three year periods overlapping.) Likewise, you have the opportunity to win back my business every two or three years as well. Customer service on exit is just as important as customer service during engagement because, every so often, you have another chance to gain back my loyalty. If you screw that up on exit, your chances are close to zero. In addition, you need to consider all of the potential or existing customers that are part of or affected by my social organizations. “Melissa” at TiVo gave me a call last week and set up some time to talk about my experience. We talked yesterday and she gave me a few moments to pontificate about my thoughts on the importance of a complete customer experience. She had listened to my customer support calls and agreed that I had made it clear that I intended to remain a TiVo customer even though suddenLink is handling my subscription. She said that suddenLink is a very important partner for them and, of course, they want to do everything they can to support TiVo / suddenLink customers.  “Melissa” also said that they had turned this experience into a training opportunity for the reps involved. I hope that is true, because that “programmer arrogance” that I mentioned above (which was somewhat pervasive back then) may be part of the reason why that company is no longer around. Good job “Melissa”!  And, like I said, I am still a TiVo fan. In fact, we love our new TiVo and many of the great new features. In addition, if you’re one of the two people that read these posts, please remember that these are just opinions. Your experiences may be, and likely will be, completely unique to you.

    Read the article

  • SQLAuthority News – History of the Database – 5 Years of Blogging at SQLAuthority

    - by pinaldave
    Don’t miss the Contest: Participate in the 5th Anniversary Contest. Today is this blog’s birthday, and I want to do a fun, informative blog post. Five years ago this day I started this blog. Intention – my personal web blog. I wrote this blog for me, and still today whatever I learn I share here. I don’t want to wander too far off topic, though, so I will write about two of my favorite things – history and databases. And what better way to cover these two topics than to talk about the history of databases? If you want to be technical, databases as we know them today only date back to the late 1960s and early 1970s, when computers began to keep records and store memories. But the idea of memory storage didn’t just appear 40 years ago – there was a history behind wanting to keep these records. In fact, the written word originated as a way to keep records – ancient man didn’t suddenly decide he wanted to read novels; he needed a way to keep track of the harvest, of his flocks, and of the tributes paid to the local lord. And that is how writing and the database began. You could consider the cave paintings from 17,000 years ago at Lascaux, France, or the clay tokens from the ancient Sumerians in 8,000 BC to be the first instances of record keeping – and thus databases. If you prefer, you can consider the advent of written language to be the first database. Many historians believe the first written language appeared in the 37th century BC, with Egyptian hieroglyphics. The ancient Sumerians, not to be outdone, also created their own written language within a few hundred years. Databases could be more closely described as collections of information, in which case the Sumerians win the prize for the first archive. A collection of 20,000 stone tablets was unearthed in 1964 near the modern-day city of Tell Mardikh, in Syria. This ancient database is from 2,500 BC, and appears to be a sort of law library where apprentice-scribes copied important documents. Further archaeological digs hope to uncover the palace library, and thus an even larger database. Of course, the most famous ancient database would have to be the Royal Library of Alexandria, the great collection of records and wisdom in ancient Egypt. It was created by Ptolemy I, and existed from 300 BC through 30 AD, when Julius Caesar effectively erased the hard drives by accidentally setting fire to it. As any programmer knows who has forgotten to hit “save” or has experienced a sudden power outage, thousands of hours of work were lost in a single instant. Databases existed in very similar conditions up until recently. Cuneiform tablets gave way to papyrus, which led to vellum, and eventually modern paper and the printing press. Someday the databases we rely on so much today will become another chapter in the history of record keeping. Who knows what the databases of tomorrow will look like! Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • How to future-proof my touch-enabled web application?

    - by Rice Flour Cookies
    I recently went out and purchased a touch-screen monitor with the intention of learning how to program touch-enabled web applications. I had reviewed the MDN documentation about touch events, as well as the W3C specification. To get started, I wrote a very short test page with two event handlers: one for the mousedown event and one for the touchstart event. I fired up the web page in IE and touched the document and found that only the mousedown event fired. I saw the same behavior with Firefox, only to find out later that Firefox can be set to enable the touchstart event using about:config. When touch events are enabled, the touchstart event fires, but not mousedown. Chrome was even stranger: it fired both events when I touched the document: touchstart and mousedown, in that order. Only on my Android phone does it appear to be the case that only the touchstart event fires when I touch the document. I did a Google search and ended up on two interesting pages. First, I found the page on CanIUse for touch events: http://caniuse.com/#feat=touch Can I Use clearly indicates that IE does not support touch events as of this writing, and Firefox only supports touch events if they are manually enabled. Furthermore, all four browsers I mentioned treat the touch in a completely different way. It boils down to this:
        IE: simulated mouse click
        Firefox with touch disabled: simulated mouse click
        Firefox with touch enabled: touch event
        Chrome: touch event and simulated mouse click
        Android: touch event
    What is more frustrating is that Google also found a Microsoft page called RethinkIE. RethinkIE brags about touch support in IE; as a matter of fact, one of their slogans is "Touch the Web". It links to a number of touch-based applications. I followed some of these links, and as best I can tell, it's just like CanIUse described: no proper touch support, just simulated mouse clicks. The MDN (https://developer.mozilla.org/en-US/docs/Web/API/Touch) and W3C (http://www.w3.org/TR/touch-events/) documentation describe a far richer interface; an interface that doesn't just simulate mouse clicks, but keeps track of multiple touches at once, the contact area, rotation, and force of each touch, and unique identifiers for each touch so that they can be tracked individually. I don't see how simulated mouse clicks can ever touch the functionality described above, which, once again, is part of the W3C specification, although it is listed as "non-normative", meaning that a browser can claim to be standards-compliant without implementing it. (Why bother making it part of the standard, then?) What motivated my research is that I've written an HTML5 application that doesn't work on Android because Android doesn't fire mouse events. I'm now afraid to try to implement touch for my application because the browsers all behave so differently. I imagine that at some time in the future, the browsers might start handling touch similarly, but how can I tell how they might be handled in the future short of writing code to handle the behavior of each individual browser? Is it possible to write code today that will work with touch-enabled browsers for years to come? If so, how?
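
    One defensive pattern is to register handlers for both touchstart and mousedown and ignore the simulated mouse event that several browsers fire shortly after a touch. A minimal sketch follows, assuming a plain DOM environment; the 500 ms window and the onActivate name are arbitrary choices, not taken from the question above.

        // Bind both input models; whichever events the browser supports will fire.
        var lastTouchTime = 0;

        function onActivate(e) {
            if (e.type === "touchstart") {
                lastTouchTime = Date.now();
            } else if (Date.now() - lastTouchTime < 500) {
                // Ignore the simulated mousedown some browsers fire right after a touch
                return;
            }
            // ...shared handling for both touch and mouse goes here...
        }

        document.addEventListener("touchstart", onActivate, false);
        document.addEventListener("mousedown", onActivate, false);

    This doesn't expose multi-touch, contact area, or force, but it does keep a single code path working across browsers that report touches as touch events, as simulated mouse clicks, or as both.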

    Read the article

  • Agile Manifesto, Revisited

    - by GeekAgilistMercenary
    Again, conversations give me a zillion things to write about. The recent conversation that has cropped up again concerns my various viewpoints on the Agile Manifesto. Not all the processes that came after the manifesto was written, but just the core manifesto itself. Just for context, here is the manifesto in all its glory.
        We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
        Individuals and interactions over processes and tools
        Working software over comprehensive documentation
        Customer collaboration over contract negotiation
        Responding to change over following a plan
        That is, while there is value in the items on the right, we value the items on the left more.
    Several of the key signatories at the time went on to write some of the core books that really gave Agile Software Development traction. If you check out the Agile Manifesto Site and do a search for any of those people, you will find a treasure trove of software development information.
    My 2 Cents
    First off, I agree with a few people out there. Agile is not Scrum, for instance. Do NOT get these things confused when checking out Agile, or pushing forward with Scrum. As David Starr points out in his blog entry, "About 35 minutes into this discussion, I realized I hadn't heard a question or comment that wasn't related to Scrum. I asked the room, 'How many people are on an agile team that is NOT using Scrum?' 5 hands. Seriously, out of about 150 people or so. 5 hands." So know, as this is one of my biggest pet peeves these days, that Scrum is not Agile. Another quote David writes, "I assure you, dear reader, 2 week time boxes does not an agile team make." This is the exact problem. Take a look at the actual manifesto above. First ideal, "Individuals and interactions over processes and tools". There are a couple of meanings in this ideal, just as there are in the other written ideals. But this one has a lot of contention with a set practice such as Scrum. There are other formulas; XP (eXtreme Programming) and Kanban are two that come to mind often. But none of these are Agile; each is instead a process based on the ideals of Agile. Some of you may be thinking, "that's the same thing". Well, no, it is not. This type of differentiation is vitally important. Agile is a set of ideals. Processes are nice, but they can change; they may work for some and not others. The Agile Manifesto covers the ideals behind what is intended, that intention being to learn and find new ways to build better software. Ideals, not processes. Definition versus implementation. Class versus object. The ideals are of utmost importance, the processes are secondary; the first ideal is what really lays this out for me: "Individuals and interactions over processes and tools". Yes, we need tools, but we need the individuals and their interactions more. For those coming into a development team, I hope you take this to mind. It is of utmost importance that this differentiation is known and fought for. The second the process becomes more important than the individuals and interactions, the team will effectively lose the advantages of the Agile ideals. This is just one of my first thoughts on the topic of Agile. I will be writing more in the near future about each of the ideals. I will make a point to outline more of my thoughts, my opinions, and my experience with the ideals of Agile and the various processes that are out there.
    Maybe I will stumble upon something new with the help of my readers? It would be a grand overture to the ideals I hold. For the original entry, check out my personal blog with other juicy tech tidbits, rants, raves, and the like. Agilist Mercenary

    Read the article

  • Ubuntu 11.04 and 10.04 hang with black screen while installing from USB disk

    - by Bill
    I've been trying to install Ubuntu 11.04 from a USB flash stick, and each time I try to boot from the USB key one of two things happens:
        A) The screen that asks you what you would like to do (e.g. run Ubuntu from the USB key or install it) shows up and the countdown to the default option starts, but as soon as I either touch the keyboard (sometimes I press Enter or the arrow keys to select an option) or the countdown gets to zero, the screen just locks up and nothing happens no matter how long I wait.
        B) When I boot from the USB key the screen will flicker for a second and then go black with a flashing white underscore at the top left corner of the screen. Again it doesn't matter how long I wait; nothing happens and pressing keys doesn't do a thing.
    The very first time I tried to install it I got a terminal-like screen that said something about a directory called 'casper' having an error of some sort. I have tried installing from USB using both 11.04 and 10.10. I'm about to try 10.04. I have read tons of forum posts about this but so far I haven't seen anything in the solutions that applies to me. My intention is to dual boot Windows 7 and Ubuntu. I must keep Windows as I am required to use Visual Studio for one of my college courses. Right now I'm using Wubi but I really want a full install. I can't use LVPM because it doesn't work with the version of Wubi I used. So now I'm thinking my best bet is to try to get a clean install working. I'd also like to convert Wubi to a full install, but there's no solution as far as I've read. So could someone tell me a reason why this is happening or if there's something I can do to get around the problem? I'm using a Gateway LT2802u netbook with an Intel Atom N455 processor, 1GB RAM, Intel Graphics Media Accelerator 3150 graphics card, and a 250GB HDD. I don't have anything on my current Wubi install that I can't replace, so keep in mind when answering that I don't care if I lose my current settings and files from Wubi. Thanks everyone!
    UPDATE
    I just answered my own question, so in case anyone else is having this same problem using similar hardware, do the following: When I first tried installing 11.04 I used the recommended universal installer tool to create the USB live/installation disk. That caused the original problem. Note that I had already downloaded the 11.04 ISO and did not use the included downloader from the USB creator. After that failed I used the same USB creator but had it download 10.10 for me. It also failed with the same issue. I repeated this process with UNetbootin as well for both versions. Finally, I downloaded the Ubuntu 10.04 ISO and used the recommended USB creator once again. There was an error while creating the USB live install, so I reformatted the USB key as FAT32 and tried again. It created the USB key. I then booted from the USB flash drive and selected "Install Ubuntu" (exact wording was different). It worked! It took me through the process that you see shown in pictures on the Ubuntu website. I let it create the appropriate partitions for me and it simply worked. I did get a few errors while the system tried to restart after it installed. It hung on a terminal-like screen but I pressed ENTER and it restarted. I booted into Windows 7, it checked the disks as it sensed that I had messed with a partition, then it booted into Windows normally. Now I'm going to uninstall Wubi and update my new full install of Ubuntu! I'm excited to get the benefits of a full install now.
So in the end, hopefully someone can learn from what I did.

    Read the article

  • Industry perspectives on managing content

    - by aahluwalia
    Earlier this week I was noodling over a topic for my first blog post. My intention for this blog is to bring a practitioner's perspective on ECM to the community; to share and collaborate on best practices and approaches that address today's business problems. Reviewing my past 14 years of experience with web technologies, I wondered what topic would serve as a good "conversation starter". During this time, I received a call from a friend who was seeking insights on how content management applies to specific industries. She approached me because she vaguely remembered that I had worked in the Health Insurance industry in the recent past. She wanted me to tell her about the specific business needs of this industry. She was in for quite a surprise as she found out that I had spent the better part of a decade managing content within the Health Insurance industry and I discovered a great topic for my first blog post! I offer some insights from Health Insurance and invite my fellow practitioners to share their insights from other industries. What does content management mean to these industries? What should solution providers be aware of when offering solutions to these industries? The United States health care system relies heavily on private health insurance, which is the primary source of coverage for approximately 58% of Americans. In the late 19th century, "accident insurance" began to be available, which operated much like modern disability insurance. In the late 20th century, traditional disability insurance evolved into modern health insurance programs. The first thing a solution provider must be aware of about the Health Insurance industry is that it tends to be transaction-intensive. They are the ones who manage and administer our health plans and process our claims when we visit our health care providers. It helps to keep in mind that they are in the business of delivering health insurance and not technology. You may find the mindset conservative in comparison to the IT industry; however, the Health Insurance industry has benefited and will continue to benefit from the efficiency that technology brings to traditionally paper-driven processes. We are all aware of the significant impact the healthcare reform bill has had on the Health Insurance industry. They are under a great deal of pressure to explore ways to reduce their administrative costs and increase operational efficiency. Overall, administrative costs of health insurance include the insurer's cost to administer the health plan, the costs borne by employers, health-care providers, governments and individual consumers. Inefficiencies plague health insurance, owing largely to the absence of standardized processes across the industry. To address this, industry leaders have come together to establish standards and invest in initiatives to help their healthcare provider partners transition to the next generation of healthcare technology. The move to online services and paperless explanations of benefits are some manifestations of technological advancements in health insurance. Several companies have adopted Toyota's LEAN methodology or Six Sigma principles to improve quality, reduce waste and excessive costs, thereby increasing the value of their plan offerings. A growing number of health insurance companies have transformed their business systems in the past decade alone and adopted some form of content management to reduce the costs involved in administering health plans. 
The key strategy has been to convert paper documents and forms into electronic formats, automate the content development process and securely distribute content to various audiences via diverse marketing channels, including web and mobile. Enterprise content management solutions can enable document capture of claim forms, manage digital assets, integrate with Enterprise Resource Planning (ERP) and Human Capital Management (HCM) solutions, build Business Process Management (BPM) processes, define retention and disposition instructions to comply with state and federal regulations and allow eBusiness and Marketing departments to develop and deliver web content to multiple websites, mobile devices and portals. Content can be shared securely within and outside the organization using Information Rights Management.  At the end of the day, solution providers who can translate strategic goals into solutions that maximize process automation, increase ease of use and minimize IT overhead are likely to be successful in today's health insurance environment.

    Read the article

  • Bad Spot to Be In: Playing Catch-up with Mobile Advertising

    - by Mike Stiles
    You probably noticed: there’s a mass migration going on from online desktop/laptop usage to smartphone/tablet usage. It’s an indicator of how we live our lives in the modern world: always on the go, with no intention of being disconnected while out there. Consequently, paid media as it relates to mobile advertising is taking the social spotlight. eMarketer estimated that in 2013, US adults would spend about 2 hours, 21 minutes a day on mobile, not counting talking time. More people in the world own smartphones than own toothbrushes (bad news I suppose if you’re marketing toothpaste). They’re using those mobile devices to access social networks, consuming at least 17% of their mobile time on them. Frankly, you don’t need a deep dive into mobile usage stats to know what’s going on. Just look around you in any store, venue or coffee shop. It’s really obvious…our mobile devices are now where we “are,” so that’s where marketers can increasingly reach us. And it’s a smart place for them to do just that. Mobile devices can be viewed more and more as shopping facilitators. Usually when someone is on mobile, they are not in passive research mode. They are likely standing near a store or in front of a product, using their mobile to seek reassurance that buying that product is the right move. They are the hottest of hot prospects. Consider that 4 out of 5 consumers use smartphones to shop, 52% of Americans use mobile devices for in-store research, 70% of mobile searches lead to online action inside of an hour, and people that find you on mobile convert at almost 3x the rate of those that find you on desktop or laptop. But what are marketers doing? Enter statistics from Mary Meeker’s latest State of the Internet report. Common sense says you buy advertising where people are spending their eyeball time, right? But while mobile is 20% of media use and rising, the ad spend there is 4%. Conversely, while print usage is at 5% and falling, ad spend there is 19%. We all love nostalgia, but come on. There are reasons marketing dollar migration to mobile has not matched user migration, including the availability of mobile ad products and the ability to measure user response to mobile ads. But interesting things are happening now. First came Facebook’s mobile ad, which let app developers pay to get potential downloads. Then their mobile ad network was announced at F8, allowing marketers to target users across non-Facebook apps while leveraging the wealth of diverse data Facebook has on those users, a big deal since Nielsen has pointed out mobile apps make up 89% of the media time spent on mobile. Twitter has a similar play in motion with their MoPub acquisition. And now mobile deeplinks have arrived, which can take users straight to sub-pages of mobile apps for a faster, more direct shopper/researcher user experience. The sooner the gratification, the smoother and faster the conversion. To be clear, growth in mobile ad spending is well underway. After mobile ad spending posted $13.1 billion in 2013, Gartner expects it to reach $18 billion globally this year, then go to $41.9 billion by 2017. Cheap smartphones and data plans are spreading worldwide, further fueling the shift to mobile. Mobile usage in India alone should grow 400% by 2018. And, of course, there’s the famous statistic that mobile should overtake desktop Internet usage this year. How can we as marketers mess up this opportunity? Two ways.
    We could position ourselves in perpetual “catch-up” mode and keep spending ad dollars where the public used to be. And we could annoy mobile users with horrid old-school marketing practices. Two-thirds of users told Forrester they think interruptive in-app ads are more annoying than TV ads. Make sure your brand’s social marketing technology platform is delivering a crystal clear picture of your social connections so the mobile touch point is highly relevant, mobile-optimized, and delivering real value and satisfying experiences. Otherwise, all we’ve done is find a new way to be unwanted. @mikestiles @oraclesocial Photo: Kate Mallatratt, freeimages.com

    Read the article

  • Take a chance !

    - by Hartmut Wiese
    Hi everybody,
    Later today I am going to reach out to the JDE partners in EMEA I am already in contact with and ask for participation and collaboration within the new EMEA JDE Partner Community. I am very excited about this community and I really believe we will have much more success in the future selling and implementing JDEdwards in this large region. For those who don't know me yet ... I have been in the JDEdwards business for a really long time. I joined JDEdwards in 1998 in Germany as a JDE PreSales Consultant. After JDEdwards/PeopleSoft was acquired by Oracle I changed my role and became responsible on an EMEA level for the Oracle Accelerate and the Oracle Business Accelerator program. A lot of you already know me ... and hopefully believe and trust me as well. Within the last five months I have talked face-to-face to approximately 60 partners during the various events I attended. We have had two PreSales Universities delivered already and I have been to one JDE Exsite event, a JDE Executive Forum, two user group events and one JDE Partner Event. Again, approximately 60+ partner discussions, and everybody likes the idea of the community and how I am going to run this in the future. At the JDEdwards UK User Group event (NOV 13) there was an external speaker talking about risk. It was a very good speech. One key element of his speech was that a sequence of (small) failures might lead to a big success. He gave very good examples from history, not software related at all, but as a result some of the accomplished individuals everybody knows today started very small, and they failed several times before they became successful. But these people did not give up, and in the long run they won and succeeded. I really spent some time reflecting on how this applies to our business today. My intention in writing these lines is to convince each partner out there to think about investing in JDEdwards TODAY. There are currently a number of potential investment ideas on the table for you. We have a very strong and powerful ERP system. We have advantages over all our competitors. Each partner has the ability to create his own SaaS model and deliver individual services to the customers. We also have three Business Accelerators available, which really speed up the implementation while still giving full flexibility to change, for example, any processing option if needed. A huge number of customers are on old releases globally and are thinking about upgrading. New technology makes new business processes available (e.g. the iPad). Oracle is a pretty much forward-looking company and we build tools and products. In the area of JDEdwards our partners are combining the Oracle tools and products and bringing the value to the customers. At one point in time you decided to run your business on your own and to become a JDE/PSFT/ORCL partner. That was of course a risk at that point in time. You did not fail, and this is very good of course. Business has changed and Oracle has the products and tools for you to become even more successful in the future, but it is a very good time for you to take a risk again. I am not able to promise you anything, but the situation is very good. You might not win every deal or increase your margin immediately, but I truly believe you will find new ways of doing your business in the future by adopting some of our ideas. The only person who can stop you ... is you. Please try something new/different. Success sometimes needs some time and initial failures, but if you have never failed, you have never lived.
    To get support during this phase, please share your doubts, thoughts, and experiences inside the new JDEdwards community and learn from others who went through similar processes. Please join here. Take care and best regards, Hartmut Wiese

    Read the article

  • Get to Know a Candidate (8 of 25): Rocky Anderson&ndash;Justice Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post about whom I am voting for. Information sourced from Wikipedia. Ross Carl “Rocky” Anderson served two terms as the 33rd mayor of Salt Lake City, Utah, between 2000 and 2008. He is the Executive Director of High Road for Human Rights. Prior to serving as Mayor, he practiced law for 21 years in Salt Lake City, during which time he was listed in Best Lawyers in America, was rated A-V (highest rating) by Martindale-Hubbell, served as Chair of the Utah State Bar Litigation Section and was Editor-in-Chief of, and a contributor to, Voir Dire legal journal. As mayor, Anderson rose to nationwide prominence as a champion of several national and international causes, including climate protection, immigration reform, restorative criminal justice, LGBT rights, and an end to the "war on drugs". Before and after the U.S. invasion of Iraq in 2003, Anderson was a leading opponent of the invasion and occupation of Iraq and related human rights abuses. Anderson was the only mayor of a major U.S. city who advocated for the impeachment of President George W. Bush, which he did in many venues throughout the United States. Anderson's work and advocacy led to local, national, and international recognition in numerous spheres, including being named by Business Week as one of the top twenty activists in the world on climate change, serving on the Newsweek Global Environmental Leadership Advisory Board, and being recognized by the Human Rights Campaign as one of the top ten straight advocates in the United States for LGBT equality. He has also received numerous awards for his work, including the EPA Climate Protection Award, the Sierra Club Distinguished Service Award, the Respect the Earth Planet Defender Award, the National Association of Hispanic Publications Presidential Award, The Drug Policy Alliance Richard J. Dennis Drugpeace Award, the Progressive Democrats of America Spine Award, the League of United Latin American Citizens Profile in Courage Award, the Bill of Rights Defense Committee Patriot Award, the Code Pink (Salt Lake City) Pink Star honor, the Morehouse University Gandhi, King, Ikeda Award, and the World Leadership Award for environmental programs. Formerly a member of the Democratic Party, Anderson expressed his disappointment with that Party in 2011, stating, “The Constitution has been eviscerated while Democrats have stood by with nary a whimper. It is a gutless, unprincipled party, bought and paid for by the same interests that buy and pay for the Republican Party.” Anderson announced his intention to run for President in 2012 as a candidate for the newly-formed Justice Party. Although founded by Rocky Anderson of Utah, the Justice Party was first recognized by Mississippi and describes itself as advocating economic justice through measures such as green jobs and a right to organize, environmental justice through enforcing employee safeguards in trade agreements, and social and civic justice through universal health care. In its first press release, the Utah Justice Party set forth its goals for justice in the economic, environmental, social and civic realms, along with a call to rid the corrupting influence of big money from government, to reverse the erosion of rights guaranteed by the Constitution, and to stop draining American resources to support illegal wars of aggression. 
Its press release says its grassroots supporters believe that now is the time for all to "shed their skeptical view that their voices don't matter", that "our 2-party system is a 'duopoly' controlled by the same corporate and military interests", and that the people must act to ensure "that our nation will achieve a brighter, sustainable future.” Anderson has ballot access in CO, CT, FL, ID, LA, MI, MN, MS, NJ, NM, OR, RI, TN, UT, VT, WA (152 electoral votes) and has write-in access in AL, AK, DE, GA, IL, IO, KS, MD, MO, NE, NH, NY, PA, TX Learn more about Rocky Anderson and Justice Party on Wikipedia.

    Read the article

  • Don't Forget To Enjoy Life

    - by Justin
    I have a pretty clear stance on posting personal information in my blogs. I tend to avoid it almost instinctively. Part of that is because I am a somewhat private person. And the other is because I know how easy it is for personal information to be gathered and collected from sources such as blogs. So, this has remained a tech-only blog for me. I've only posted topics mostly related to issues I have encountered at work. In a way this blog is a 'bookmark' for me. If I post something here and run into the issue again it allows me to refer back to a convenient place where the 'fix' is documented in a way that I understand. But today, I am posting something that speaks to everyone. Something PERSONAL. Honestly, I expect this entry to receive zero views. But if nothing else, I can come back to this blog one day when I'm having a bad day or something and run across this post. And I will be reminded... DON'T FORGET TO ENJOY LIFE. Say this to yourself out loud, right now. People, we can get caught up in some rather mundane details as we trek through life. It's so easy to lose track of what really matters that it should be no surprise to find yourself reading something like this and thinking to yourself 'Yeah. You are right, man. Some of this crap I'm clinging on to right now is so small in the grand scheme of things'. I have no reservation, no shame, in saying that I am more often than not caught up in the ever-evolving world of 'shit that does not matter'. When you work in technology, you are surrounded by deadlines, upgrades, new versions, support 'end of life', etc. And by the time you get done with your 8 hours you go home and put in a few more because you are STILL CAUGHT UP in the things you dealt with at work all day. DO YOURSELF A FAVOR. DO YOUR FAMILY AND FRIENDS A FAVOR. When you are done for the day, and you drive home, get those work-related things out of your head before you pull into the driveway. If you are still thinking on them when you park the car, leave the engine running, close your eyes and take a deep breath. If you believe in God, pray. If you don't, then meditate for a second with the INTENTION of letting go of the day and becoming the 'real you'. You may have forgotten who the real you is, so I'll remind you.... THE REAL YOU IS THAT GUY OR GAL THAT LAUGHS, LOVES, AND LIVES. Be the real you as often as possible. If you can't do it during your 9 - 5, do it at home. YOUR RELATIONSHIPS AND YOUR PERSONAL HAPPINESS DEPEND ON IT. I am going to make you a promise right now. If you do what I've just said, your days will be longer and your joy will be exponential. I can't explain why I know this to be true. But I do know it. And if you are there reading this right now, you know it is true too. We both know it is true because it COMES FROM WITHIN EVERY MAN, WOMAN and CHILD. We are born into love and happiness. Let's not fade away into the darkness so easily found in this world. Let's keep the flame burning. The flame of passion. Passion for LIFE. Peace be with you.

    Read the article

  • Conflict Minerals - Design to Compliance

    - by C. Chadwick
    Dr. Christina Schröder - Principal PLM Consultant, Enterprise PLM Solutions EMEA
    What does the Conflict Minerals regulation mean?
    Conflict Minerals has recently become a new buzzword in the manufacturing industry, particularly in electronics and medical devices. Known as "Dodd-Frank Section 1502", this regulation requires SEC-listed companies to declare the origin of certain minerals by 2014. The intention is to reduce the use of tantalum, tungsten, tin, and gold which originate from mines in the Democratic Republic of Congo (DRC) and adjoining countries that are controlled by violent armed militia abusing human rights. Manufacturers now request information from their suppliers to see if their raw materials are sourced from this region and which smelters are used to extract the metals from the minerals. A standardized questionnaire has been developed for this purpose (download and further information). Soon, even companies which are not directly affected by the Conflict Minerals legislation will have to collect and maintain this information, since their customers will request the data from their suppliers. Furthermore, it is expected that public opinion and consumer interests will force manufacturers to avoid the use of metals of questionable origin.
    Impact for existing products
    Several departments are involved in the process of collecting data and providing conflict minerals compliance information. For already marketed products, purchasing typically requests Conflict Minerals declarations from the suppliers. In order to address requests from customers, technical operations or product management are usually responsible for keeping track of all parts, raw materials and their suppliers so that the required information can be provided. For complex BOMs, it is very tedious to maintain complete, accurate, up-to-date, and traceable data. Any product change or new supplier can, in addition to all other implications, have an effect on the Conflict Minerals compliance status.
    Influence on product development
    It makes sense to consider compliance early in the planning and design of new products. Companies should evaluate which metals are needed or contained in supplier parts and whether these could originate from problematic sources. The answer influences the cost and risk analysis during development. If it is known early on that a part could be non-compliant with respect to Conflict Minerals, alternatives can be evaluated and costly changes at a later stage can be avoided.
    Integrated compliance management
    Ideally, compliance data for Conflict Minerals, but also for other regulations like REACH and RoHS, should be managed in an integrated supply chain system. The compliance status is directly visible across the entire BOM at any part level and for the finished product. If data is missing, a request to the supplier can be triggered right away without having to switch to another system. The entire process, from identification of the relevant parts, requesting information, handling responses, and data entry, to compliance calculation is fully covered end-to-end while being transparent for all stakeholders.
    Agile PLM Product Governance and Compliance (PG&C)
    The PG&C module extends Agile PLM with exactly this integrated functionality. As with the entire Agile product suite, PG&C can be configured according to customer requirements: data fields, attributes, workflows, routing, notifications, permissions, and so on can be quickly and easily tailored to a customer’s needs.
    Optionally, external databases can be interfaced to query commercially available sources of Conflict Minerals declarations, which obviates the need for a separate supplier request in many cases. Suppliers can access the system directly for data entry through a special portal. The responses to the standard EICC-GeSI questionnaire can be imported by the supplier or internally. Manual data entry is also supported. A set of compliance-specific dashboards and reports complements the functionality.
    Conclusion
    The increasing number of product compliance regulations, of which Conflict Minerals is just one example, requires companies to implement efficient data and process management in this area. Consumer awareness in this matter is increasing as well, so an integrated system from development to production also provides a competitive advantage. Follow this link to learn more about Agile's PG&C solution.

    Read the article

  • Tuple - .NET 4.0 new feature

    - by nmarun
    Something I hit while playing with .NET 4.0 – Tuple. MSDN says ‘Provides static methods for creating tuple objects.’ and the example below is:

        1: var primes = Tuple.Create(2, 3, 5, 7, 11, 13, 17, 19);

    Honestly, I’m still not sure with what intention MS provided us with this feature, but the moment I saw this, I said to myself – I could use it instead of anonymous types. In order to put this to the test, I created an XML file:

        1: <Activities>
        2:   <Activity id="1" name="Learn Tuples" eventDate="4/1/2010" />
        3:   <Activity id="2" name="Finish Project" eventDate="4/29/2010" />
        4:   <Activity id="3" name="Attend Birthday" eventDate="4/17/2010" />
        5:   <Activity id="4" name="Pay bills" eventDate="4/12/2010" />
        6: </Activities>

    In my console application, I read this file and let’s say I want to pull all the attributes of the node with an id value of 1. Now, I have two ways – either define a class/struct that has these three properties and use it in the LINQ query, or create an anonymous type on the fly. But if we go the .NET 4.0 way, we can do this using Tuples as well. Let’s see the code I’ve written below:

        1: var myActivity = (from activity in loaded.Descendants("Activity")
        2:                   where (int)activity.Attribute("id") == 1
        3:                   select Tuple.Create(
        4:                       int.Parse(activity.Attribute("id").Value),
        5:                       activity.Attribute("name").Value,
        6:                       DateTime.Parse(activity.Attribute("eventDate").Value))).FirstOrDefault();

    Line 3 is where I’m using Tuple.Create to define my return type. There are three ‘items’ (that’s what the elements are called) in the ‘myActivity’ type, aptly declared as Item1, Item2, Item3. So there you go, you have another way of creating anonymous types. Just out of curiosity, I wanted to see what the type actually looked like. So I did a:

        1: Console.WriteLine(myActivity.GetType().FullName);

    and the return was (formatted for better readability):

        "System.Tuple`3[
            [System.Int32, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],
            [System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],
            [System.DateTime, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]
        ]"

    The `3 specifies the number of items in the tuple. The other interesting thing about the tuple is that it knows the data type of the elements it’s holding. This is shown in the above snippet and also when you hover over myActivity.Item1, it shows the type as an int, Item2 as string and Item3 as DateTime. So you can safely do:

        1: int id = myActivity.Item1;
        2: string name = myActivity.Item2;
        3: DateTime eventDate = myActivity.Item3;

    Wow.. all I can say is: HAIL 4.0.. HAIL 4.0.. HAIL 4.0

    Read the article

  • Asynchrony in C# 5 (Part I)

    - by javarg
    I’ve been playing around with the new Async CTP preview available for download from Microsoft. It’s amazing how language trends are influencing the evolution of Microsoft’s development platform. Much more effort is being put in at the language level today than in previous versions of .NET. In this post series I’ll review some major features contained in this release:
        Asynchronous functions
        TPL Dataflow
        Task-based Asynchronous Pattern
    Part I: Asynchronous Functions
    This is a means of expressing asynchronous operations. This kind of function must return void or Task/Task<> (functions returning void let us implement Fire & Forget asynchronous operations). The two new keywords introduced are async and await.
        async: marks a function as asynchronous, indicating that some part of its execution may take place some time later (after the method call has returned). Thus, all async functions must include some kind of asynchronous operation. This keyword on its own does not make a function asynchronous, though; its nature depends on its implementation.
        await: allows us to define operations inside a function that will be awaited for continuation (more on this later).
    Async function sample:

        async void ShowDateTimeAsync()
        {
            while (true)
            {
                var client = new ServiceReference1.Service1Client();
                var dt = await client.GetDateTimeTaskAsync();
                Console.WriteLine("Current DateTime is: {0}", dt);
                await TaskEx.Delay(1000);
            }
        }

    The previous sample is a typical usage scenario for these new features. Suppose we query some external Web Service to get data (in this case the current DateTime) and we do so at regular intervals in order to refresh the user’s UI. Note the async and await keywords working together. The ShowDateTimeAsync method indicates its asynchronous nature to the caller using the keyword async (that is, it may complete after returning control to its caller). The await keyword indicates that the flow control of the method will continue executing asynchronously after client.GetDateTimeTaskAsync returns. The latter is the most important thing to understand about the behavior of this method and how this actually works. The flow control of the method will be reconstructed after any asynchronous operation completes (specified with the keyword await). This reconstruction of flow control is the real magic behind the scenes, and it is done by the C#/VB compilers. Note how we didn’t use any of the regular existing async patterns and we’ve defined the method very much like a synchronous one. Now, compare the following code snippet with the previous async/await version:

    Traditional UI async:

        void ComplicatedShowDateTime()
        {
            var client = new ServiceReference1.Service1Client();
            client.GetDateTimeCompleted += (s, e) =>
            {
                Console.WriteLine("Current DateTime is: {0}", e.Result);
                client.GetDateTimeAsync();
            };
            client.GetDateTimeAsync();
        }

    The previous implementation is somewhat similar to the first one shown, but more complicated. Note how the while loop is implemented as a chained callback to the same method (client.GetDateTimeAsync) inside the event handler (please, do not do this in your own application, this is just an example). How does it work? Using a state workflow (or jump table, actually), the compiler expands our code and creates the necessary steps to execute it, resuming pending operations after any asynchronous one. The intention of the new async/await pattern is to let us think and code as we normally do when designing an algorithm.
    It also allows us to preserve the logical flow control of the program (without using any tricky coding patterns to accomplish this). The compiler will then create the necessary workflow to execute operations as they happen in time.

    Read the article

  • What "pieces" are needed in order to set up a cluster of physical servers?

    - by Chris Dutrow
    Background: Currently we use Rackspace cloud servers. We have no intention of stopping using them, but we would like to look into setting up a cluster of physical servers (probably desktop computers in the $400 range with 8 GB of memory each) to offset some of our load and work as a secondary, more powerful, less reliable system. To put things in perspective, we can buy comparable desktop computers for the same price we pay in one month to rent them on Rackspace Cloud. I understand that this is generally a dumb idea. However, in this particular instance, the server cluster is needed for its computation power. It is not mission-critical, it does not host a consumer-facing website, and if it goes down for a day or two, it's not really a problem. Currently we have access to business-class Verizon FiOS. If I understand correctly, we can get at least 25 dedicated IP addresses with this service, which should be enough.
    Requirements:
    - Each server runs CentOS 6.3
    - Some of the servers run Python and execute processes from a task queue (Redis or RabbitMQ)
    - Some of the servers are capable of serving static files and Python-driven REST APIs
    - Some of the servers host a Cassandra database cluster
    - One or more of the servers are Redis database servers
    - One or more of the servers are PostgreSQL servers
    Questions:
    - What kind of router or switch is needed? We would like the computers to be able to communicate effectively with each other via internal IP addresses. This is especially important for communicating with the servers hosting Redis, which need to be able to respond to requests very quickly. Are there special switches or routers that need to be used to connect the servers together?
    - Are desktop computers OK for this? We have found that we are mostly RAM-bottlenecked. I understand that some servers have highly superior CPUs, but I'm not sure we need CPU power as much as we need RAM, which is cheap in desktop computers. Will we have problems with the Wi-Fi cards in the desktops or any other unexpected hardware limitation?
    - What tools should be used to "image" the servers? For example, when we get an installation right for a Redis server or Cassandra node, are there tools that come with CentOS 6.3 to image the server to a USB drive or something like that, or do we need to use some other software for this?
    - What other things are we missing that we should be concerned about?
    Thanks so much!
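    On the imaging question, a minimal sketch of one low-tech approach that uses nothing beyond the tools already shipped with CentOS is a dd-based disk image; the device names and file name below are placeholder assumptions, not taken from the question:

        # On the configured machine, booted from a live CD so the disk is quiescent:
        # copy the whole system disk (assumed /dev/sda) to a compressed image on a
        # mounted USB drive (assumed at /mnt/usb)
        dd if=/dev/sda bs=4M | gzip > /mnt/usb/redis-node.img.gz

        # On a new, identically sized machine, restore the image the same way:
        gunzip -c /mnt/usb/redis-node.img.gz | dd of=/dev/sda bs=4M

    Tools such as Clonezilla or a CentOS kickstart file would be the more conventional route, but the sketch above illustrates what "imaging to a USB drive" can mean in practice.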

    Read the article

  • Web Site Serving, Cloud-Computing, oh, my

    - by Frank
    I'm planning a software-based service. To give it a bit of context (type of traffic), assume it is similar to Facebook in nature (with a little GitHub thrown in). I've been trying to understand my different hosting options. I've been using a shared host with GoDaddy for years just fine. I currently host a WordPress website there and I've not had any problems; quite frankly, they've taken good care of me. However, a shared hosting environment is limited by nature. For example, I can't do anything but host a website there: I cannot run a Mercurial server, for instance. The last time I attempted to build a web application with the intention of eventually launching it via GoDaddy, I ran into all sorts of trouble because it was shared hosting (assembly issues, etc.). At the time, the cost and time sank my project. (The lack of direct access was also frustrating.) (To be fair to GoDaddy, this was over 3 years ago.) I've been looking at Rackspace or Amazon as a possible cloud solution, but it seems to be just processing power and bandwidth (and an OS); from what I understand, I'd need to get Apache and MySQL working on my own. The way cloud hosting is priced, however, seems appealing. I figure my final option might be to use a virtual private host. I think this would be more flexible than a shared-host site but less scalable than a cloud-based server. So, I guess my question is: what is an appropriate solution for someone who intends to build a web application service? I figure that I need to establish a hosting environment now rather than later so I can plan to use the environment effectively. I'd prefer to be fairly economical to start out with. I really can't afford to pay $999 (or even $99) while I build up the site and get the core functionality online, but at the same time, I'd like the selected environment to grow as needed. Thank you.

    Read the article

  • XP Pro product keys

    - by Bill
    I have a very serious problem. After my XP Home OS was trashed by rogue software - a trial of a thing named TuneUp - I did a clean install, including an HDD reformat, of Windows XP Pro from a purchased OEM disk. This was Service Pack 2, subsequently upgraded to SP3. I had conclusively mislaid the product key. I had to access data on the machine VERY urgently, and I did not then know that under some circumstances Microsoft might agree to provide a replacement. I found what I now know must have been a pirate key on the net, which enabled installation but NOT activation. This of course left me functional but 30 days before meltdown - about 20 days left as I write this. Various retailers want around £100 for a retail copy with a matching product key - this would be paying twice over just to continue use on the same computer. I have neither need nor intention of installing XP Pro on any other computer. I have tried a number of applications claiming to deal with this problem, but none of them work. A Belarc profile shows that the pirate key has replaced the original one on the system. I have now found two keys, one of which might be the original, but neither works. I am about to upgrade the HDD, and it looks like I will just be passing the problem on when I install XP. I have retrieved a key from the disk, but it is seemingly one Microsoft uses in production and it does not work: 76487-OEM-0015242-71798. The keys I have, one of which might or might not be the original, are CD87T-HFP4G-V7X7H-8VY68-W7D7M and FC8GV-8Y7G7-XKD7P-Y47XF-P829W (or P819W - I believe it to be the latter, but the box will not accept it). The pirate key which has enabled this install, and which is now stored on the system but will not activate, is QQHHK-T4DKG-74KG7-BQB9G-W47KG. In these circumstances, is it likely that Microsoft would issue a replacement? Is there any other solution? I am not trying to defraud anyone, just to keep on using the product I legitimately bought. Bill

    Read the article

  • Basic OpenVPN setup not working

    - by WalterJ89
    I am attempting to connect two Windows 7 (x64 + x32) computers (there will be 4 in total) using OpenVPN. Right now they are on the same network, but the intention is to be able to access the client remotely regardless of its location. The problem I am having is that I am unable to ping or tracert between the two computers. They seem to be on different subnets even though I have the mask set to 255.255.255.0: the server ends up as 10.8.0.1 255.255.255.252, the client as 10.8.0.6 255.255.255.252, and a third ends up as 10.8.0.10. I don't know if this is a Windows 7 problem or something I have wrong in my config. It's a very simple setup; I'm not connecting two LANs.
    This is the server config (I removed all the extra lines because it was too ugly):
        port 1194
        proto udp
        dev tun
        ca keys/ca.crt
        cert keys/server.crt
        key keys/server.key  # This file should be kept secret
        dh keys/dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-to-client
        duplicate-cn
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 6
    This is the client config:
        client
        dev tun
        proto udp
        remote thisdomainis.random.com 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca keys/ca.crt
        cert keys/client.crt
        key keys/client.key
        ns-cert-type server
        comp-lzo
        verb 6
    Is there anything I missed in this? The keys are all correct and the VPNs connect fine; it's just the subnet or route issue. Thank you.
    EDIT: It seems that openvpn-status.log on the server has the routes for the client:
        OpenVPN CLIENT LIST
        Updated,Wed May 19 18:26:32 2010
        Common Name,Real Address,Bytes Received,Bytes Sent,Connected Since
        client,192.168.10.102:50517,19157,20208,Wed May 19 17:38:25 2010
        ROUTING TABLE
        Virtual Address,Common Name,Real Address,Last Ref
        10.8.0.6,client,192.168.10.102:50517,Wed May 19 17:38:56 2010
        GLOBAL STATS
        Max bcast/mcast queue length,0
        END
    Also, this is from the client.log file, which seems to be correct:
        C:\WINDOWS\system32\route.exe ADD 10.8.0.0 MASK 255.255.255.0 10.8.0.5
    ANOTHER EDIT: 'route print' on the server shows the route:
        Destination    Mask             Gateway     Interface
        10.8.0.0       255.255.255.0    10.8.0.2    10.8.0.1
    The same on the client shows:
        10.8.0.0       255.255.255.0    10.8.0.5    10.8.0.6
    So the routes are there. What can the problem be? Is there anything wrong with my configs? Why would OpenVPN be having problems communicating?
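    For context, the /30 addresses described above (255.255.255.252 masks, clients at 10.8.0.6, 10.8.0.10, ...) are what OpenVPN's default "net30" tun topology hands out, so they are not by themselves a sign of a broken config. A hedged sketch of the alternative addressing mode, assuming OpenVPN 2.1 or later on all peers; this is an illustration, not a guaranteed fix for the ping problem (a Windows firewall rule blocking ICMP on the TAP adapter is another common culprit):

        # sketch: add above the existing "server" line in the server config
        topology subnet              # give every peer an address in one flat /24
        server 10.8.0.0 255.255.255.0
        # depending on version, clients may also need "topology subnet" in their
        # config, or the server can push it to them

    With this mode the server becomes 10.8.0.1 and clients get consecutive addresses in the same /24 instead of per-client /30s.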

    Read the article

  • Should use EXT4 or XFS to be able to 'sync'/backup to S3?

    - by Rafa
    It's my first message here, so bear with me... (I have already checked quite a few of the "Related Questions" suggested by the editor.) Here's the setup: a brand new dedicated server (8 GB RAM, some 140+ GB disk, RAID 1 via a hardware controller, 15,000 RPM), running Ubuntu Server 64-bit 10.04 LTS. It's a production web server (with MySQL on it too, not just serving web requests), not a personal desktop computer or similar. We have an Amazon EC2+EBS setup with the EBS volume formatted as XFS for easily taking snapshots to S3 via the AWS console. We are now migrating to the dedicated server and I want to be able to back up our data to Amazon's S3, the main reason being the possibility of using the latest snapshot from an EC2 instance in case of hardware failure on the dedicated server. There are two approaches I am thinking of:
    - do a "simple" file-based backup with rsync, dumping the database and other files, and uploading to Amazon via S3 API commands, or to an EC2 instance, or something;
    - do a file-system "freeze" (using XFS) with the usual EBS/EC2 snapshot tool to take part of the file system, take a snapshot, and upload it to Amazon.
    Here's my question (or series of questions): Can I safely use XFS for the whole system as the main and only format on the dedicated server? If not, is it safe to use EXT4, or should I use something else? Would it then be possible to make snapshots of the system to upload to Amazon? Is it possible/feasible/practical to do what I want to do, anyway? Any recommendations? When searching around for S3/EBS/XFS, anything relevant to my problem is usually focused on taking snapshots of an XFS system that is already an EBS volume. My intention is to do it on a "real"/bare-metal dedicated server.
    Update: I just saw this on Wikipedia: "XFS does not provide direct support for snapshots, as it expects the snapshot process to be implemented by the volume manager." I had always assumed that I could choose two ways of doing snapshots: via LVM or via XFS (without LVM). After reading this, I realize these two options are more like it:
    - With XFS: 1) do xfs_freeze; 2) copy the frozen files via, e.g., rsync; 3) unfreeze XFS.
    - With LVM and XFS: 1) do xfs_freeze; 2) make a binary copy of the frozen fs via lvcreate and related commands; 3) unfreeze XFS; 4) somehow back up the LVM snapshot.
    Thanks a lot in advance. Let me know if I need to clarify something.
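    A minimal sketch of the first approach (freeze, copy, thaw), assuming an XFS filesystem mounted at /data and an EC2 instance reachable as backup-host; the host name and paths are placeholders, and the MySQL dump step is included because copying live database files on their own is not a consistent backup:

        # dump the database first so the copy contains a consistent snapshot of it
        mysqldump --all-databases --single-transaction > /data/backup/all-databases.sql

        # stop writes to the filesystem, copy it off-site, then thaw it
        xfs_freeze -f /data
        rsync -aH --delete /data/ backup-host:/backups/dedicated-server/
        xfs_freeze -u /data

    Keeping the freeze window short matters on a production box; rsync's incremental transfers help with that, and the same freeze/thaw bracket applies around lvcreate in the LVM variant.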

    Read the article

  • NFS4 / ZFS: revert ACL to clean/inherited state

    - by Keiichi
    My problem is identical to this Windows question, but pertains to NFS4 (Linux) and the underlying ZFS (OpenIndiana) we are using. We have this ZFS shared via NFS4 and CIFS for Linux and Windows users respectively. It would be nice for both user groups to benefit from ACLs, but the one missing puzzle piece goes thusly: each user has a home, where he sets a top-level, inherited ACL. He can later refine permissions for the contained files/folders iteratively. Over time, permissions sometimes need to be generalized again to avoid increasing pollution of ACL entries. You can tweak the ACL of every single file if need be to obtain the wanted permissions, but that defeats the purpose of inherited ACLs. So, how can an ACL be completely cleared, like in the question linked above? I have found nothing about what a blank, inherited ACL should look like; this use case simply does not seem to exist. In fact, the Solaris chmod manpage clearly states:
        A-    Removes all ACEs for current ACL on file and replaces current ACL with
              new ACL that represents only the current mode of the file.
    I.e. we get three new ACL entries filled with stuff representing the permission bits, which is rather useless for cleaning up. If I try to manually remove every ACE, on the last one I get:
        chmod A0- <file>
        chmod: ERROR: Can't remove all ACL entries from a file
    Which, by the way, makes me think: why not? In fact, I really want the whole file-specific ACL gone. The same holds for Linux, which enumerates ACEs starting with 1(!) and verbalizes its woes less diligently:
        nfs4_setacl -x 1 <file>
        Failed setxattr operation: Unknown error 524
    So, what is the idea behind ACLs under Solaris/NFS? Can they never be cleaned up? Why does the recursion option for the ACL-setting commands pollute all children instead of setting a single ACL and making the children inherit? Is this really the intention of the designers? I can clean up the ACLs using a Windows client perfectly well, but am I supposed to tell the Linux users they have to switch OS just to consolidate permissions?
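    Not the "blank inherited ACL" the poster is asking for, but a hedged sketch of one workaround sometimes used from the Linux side: let the filesystem compute a freshly inherited ACL for a new scratch file and stamp that onto the polluted file. It assumes the nfs4-acl-tools package (nfs4_getfacl/nfs4_setfacl) on the NFS client; all paths are placeholders:

        # create a scratch file so the server computes a freshly inherited ACL for it
        touch /export/home/alice/projects/.acl-template
        nfs4_getfacl /export/home/alice/projects/.acl-template > /tmp/inherited.acl

        # replace the polluted file's ACL with that freshly inherited one
        nfs4_setfacl -S /tmp/inherited.acl /export/home/alice/projects/report.txt
        rm /export/home/alice/projects/.acl-template /tmp/inherited.acl

    This only approximates "back to inherited" and does not touch the inheritance flags on directories, so it is an illustration of the workaround idea rather than an answer to the design question raised above.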

    Read the article

  • Single domain name potentially resolving to multiple servers

    - by Jace
    First time here at Server Fault, and I apologize in advance that this domain stuff is not really my strength. Any and all suggestions are much appreciated; I am completely lost and incredibly tired! I've inherited an incredibly convoluted system from my predecessor, and I'm trying to find a way to solve it - or I need to be told that it just isn't possible. I've got an old site on ServerA (some kind of Linux distribution) with the domain SomeDomain.com. There is a new site sitting on ServerB (Ubuntu), with the intention of having SomeDomain.com serve it in the future (it is replacing the old site). ServerA also has a web app that is currently in use by other departments within the company (accessible at SomeDomain.com/web-app/).
    The goal: to have SomeDomain.com and all extensions of this domain name (subdomains, URLs, etc.) serve the new site on ServerB, BUT the URL SomeDomain.com/web-app/ must serve the web app on ServerA.
    The catch: ServerA is a shared server with a hosting company with VERY limiting restrictions in place - I cannot adjust DNS settings (apart from name servers; I cannot set A records or anything), whereas I have full access to ServerB to do as I wish. Therefore the web app MUST be served from SomeDomain.com/web-app/ and not from a subdomain or anything. These limitations make migrating the web app from ServerA to ServerB rather undesirable, AND this web app will be replaced in the near future, so it isn't worth the effort right now. Therefore, ultimately I will want one domain name to resolve to ServerB's IP address most of the time, but in the event that the URL is SomeDomain.com/web-app/, it should resolve to ServerA's IP. Note: the domain names don't, technically, have to resolve to one IP or another - but ultimately the URLs must stay consistent.
    Some things I have tried: I've looked into mod_rewrite and .htaccess to try to achieve this effect, but it doesn't look like it's going to work for me - though I may have done it wrong (on ServerB, I just checked whether the request URI was /web-app/ and tried to serve the /web-app/ folder on ServerA). I do have the ability to modify the name servers on both servers. I am not able to make a subdomain on ServerA that points back to ServerA (I assume because the hosting company's servers use the URL to determine which site to serve). I figured this could be good, as I could set an A record on ServerB to point to the web app on ServerA - but alas, ServerA requires SomeDomain.com. If there is any more information I can give, please let me know. I need a nudge in the right direction, ideas, or a solution.
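    A hedged sketch of the usual pattern for this situation (point DNS at ServerB and have it reverse-proxy just the /web-app/ path back to ServerA), assuming ServerB runs Apache with mod_proxy/mod_proxy_http enabled; the IP address below is a placeholder for ServerA's address, and ProxyPreserveHost is there because the shared host picks the site from the Host header:

        <VirtualHost *:80>
            ServerName SomeDomain.com

            # forward only /web-app/ to the old shared server; serve everything else locally
            ProxyPreserveHost On
            ProxyPass        /web-app/ http://203.0.113.10/web-app/
            ProxyPassReverse /web-app/ http://203.0.113.10/web-app/
        </VirtualHost>

    An equivalent location block exists for nginx if ServerB is not running Apache. Either way, only one public IP and one DNS record are involved, which sidesteps the fact that DNS cannot route by URL path.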

    Read the article

  • I can't delete a directory inside a junctioned directory

    - by Fredy Muñoz
    So this is the deal. A couple of days ago I moved my profile folder C:\Documents and Settings\fmunoz to a different drive, D:\fmunoz. Today I created a directory on my desktop using the point-and-click method:
    1. Right-click on an empty space on the desktop
    2. Select New
    3. Select Folder
    4. Leave the default name New Folder and press Enter
    I tried to delete the folder using the point-and-click method:
    1. Right-click the New Folder directory
    2. Select Delete
    After five seconds, I got the following message:
        Error Deleting File or Folder
        Cannot delete New Folder: Access is denied.
        Make sure the disk is not full or write-protected and that the file is not currently in use.
    Initially I thought that there must be some sort of indexing service locking the directory, so I got a list of open files using the TuneUp Process Manager tool, but the New Folder directory wasn't there. I double-clicked My Computer, navigated to the desktop directory C:\Documents and Settings\fmunoz\Desktop, tried to delete the New Folder directory using the same point-and-click method described above, and got exactly the same message after the same amount of time. In the same window, I navigated to the actual location of the desktop directory, D:\fmunoz\Desktop, tried to delete the New Folder directory, and this time it worked. I thought that this behavior was due to some special treatment that Windows gives to the desktop or the profile directories, so I tried doing the same thing with a different set of directories:
    1. Created a folder D:\dummy
    2. Created a junction C:\dummy pointing to D:\dummy
    3. Created a New Folder directory in C:\dummy
    4. Tried to delete New Folder from C:\dummy. Didn't work.
    5. Tried to delete New Folder from D:\dummy. It worked.
    I also tried creating the folder in the actual directory rather than the junction directory:
    1. Created a New Folder directory in D:\dummy
    2. Tried to delete New Folder from C:\dummy. Didn't work.
    3. Tried to delete New Folder from D:\dummy. It worked.
    I also tried using the Delete button instead of the Delete option of the context menu, but it didn't work. When using the Shift+Delete sequence, it works. It also works when using the rd command in the console, but in both cases the deleted directory doesn't go to the Recycle Bin, which is my intention when using the Delete context menu option or the Delete button.
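    For anyone who wants to reproduce the symptom from a command prompt, here is a small sketch using the Sysinternals junction.exe tool (XP has no built-in mklink); the tool choice and paths are assumptions for illustration, not taken from the original post:

        rem create the same layout: a junction C:\dummy pointing at D:\dummy
        mkdir D:\dummy
        junction C:\dummy D:\dummy

        rem create a folder through the junction, then remove it
        mkdir C:\dummy\NewFolder
        rd /s /q C:\dummy\NewFolder

    As the poster notes, rd succeeds where the Explorer Delete fails, but it bypasses the Recycle Bin; on XP the Recycle Bin is kept per volume (the RECYCLER folder on each NTFS drive), which is one often-cited reason Explorer struggles when a junction crosses drives.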

    Read the article

  • FTP script needs blank line

    - by Ones and Zeroes
    I am trying to determine the reason some FTP servers require a blank line in the script, as follows:
        open server.com
        username
        (blank line here)
        ftp_commands
        bye
    Refer to the blank line required after the username credentials. Example from: FTP from batch file. Another reference to the same: http://newsgroups.derkeiler.com/Archive/Comp/comp.sys.ibm.as400.misc/2008-05/msg00227.html. Also discussed here: archive.midrange.com/midrange-l/200601/msg00048.html ("The behavior I'm observing is the same as if I didn't specify the password to login."), with an answer referring to our same fix: archive.midrange.com/midrange-l/200601/msg00053.html and archive.midrange.com/midrange-l/200601/msg00065.html.
    Note: It is my experience that FTP questions attract uncouth responses. Admittedly FTP is outdated, but many clients still have legacy systems which they cannot upgrade or replace; the reasons for that should not be discussed here. The intention of this question is to invite a positive response. Please do not respond if you disagree with the above. If you have never encountered this same issue, please do not respond. I suspect this may be limited to FTP scripts executed from Windows machines, but I have been told that this happens often and with many different servers. My specific interest is to understand what may cause this, as I have a real-world example of a production system suddenly requiring this as a workaround fix after running for many years without issue. The server belongs to a third party who claims no change on their end. Server details are unknown and cannot be determined. Any help or encouragement from someone who has come across the same would be appreciated.
    PS: Sorry for the many words and references to painful responses, but I have asked similar questions on Server Fault and elsewhere and unfortunately got back knee-jerk responses to FTP and respondents debating the validity of the question. I would truly not ask, or re-post, this question online if I had a better understanding of the issue. I know of people who have seen this issue but don't know what causes it. I am wary that this question would again turn into another irrelevant discussion. Please, I ask very nicely: please do not respond if you have not encountered a similar issue.
    FURTHER EDIT: Please do not suggest changing the product. The problem is not the blank line requirement; we know this fixes the issue. The problem is not being able to explain the reason for the blank line in the first place. A slight difference, but a critical point with regard to answering this question.
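    One explanation commonly offered in the threads linked above is that the Windows command-line ftp client consumes one script line per login prompt, so the line after the user name is read as the password; a blank line therefore supplies an empty password, and a server or login change that introduces an extra prompt makes the blank line suddenly necessary. Whether that matches this particular server cannot be confirmed from here. A hedged sketch of a script layout that sidesteps the prompt ordering entirely (the host, account, and file names are placeholders):

        rem run as:  ftp -n -s:upload.ftp
        rem contents of upload.ftp:
        open server.com
        user myaccount mypassword
        binary
        put report.csv
        bye

    The -n switch suppresses auto-login so no password prompt is raised, and the user subcommand supplies the user name and password on one line, which is why this variant is often suggested when the blank-line behaviour appears.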

    Read the article

< Previous Page | 15 16 17 18 19 20 21 22 23  | Next Page >