Search Results

Search found 5180 results on 208 pages for 'outside'.

Page 96/208

  • Integrating Social, Marketing, and Loyalty to Deliver Great Customer Experiences

    - by Charles Knapp
    Eighty-nine percent of consumers quit a brand after one bad experience. With the high cost of acquiring new customers, what can brand leaders do? At the Loyalty World Conference this week in London, global business leaders such as the co-founder of Ben & Jerry’s shared the latest thinking on how to retain customers and boost advocacy. Melissa Boxer and Sundar Swaminathan of Oracle shared that by taking an outside-in approach, you can deliver a differentiated, loyalty-building experience throughout the customer lifecycle, from researching and selecting through to using and recommending. To transform customer experiences, you need to pull your brand’s social, marketing, and loyalty functions out of their commonplace silos and integrate them. Three key strategies:
    - Know more and understand: unify and capture insights across touch points to better understand whom to serve, how to serve, and when to serve.
    - Connect and engage across social and traditional channels, empowering a relationship ecosystem between social communities, customers, and employees.
    - Make the personalized experience easy and rewarding.
    Visit us on twitter.com/oraclecrm for more useful highlights from the #lwconf conference.

  • Should certain math classes be required for a Computer Science degree?

    - by sunpech
    For a Computer Science (CS) degree at many colleges and universities, certain math courses are required: Calculus, Linear Algebra, and Discrete Mathematics are a few examples. However, since I've started working in the real world as a software developer, I have yet to truly use some of the knowledge I once acquired from taking those classes. Discrete Math might be the only exception. My questions: Should these math classes be required to obtain a computer science degree? Or would they be better served as electives? I'd even challenge whether certain math classes help with the required CS classes. For example, I never used linear algebra outside of the math class itself. I hear it's used in Computer Graphics, but I never took those classes, yet linear algebra was still required for a CS degree. I personally think it would be better served as an elective rather than a requirement, because it's specific to a branch of CS rather than to CS in general. From a Slashdot post, CS Profs Debate Role of Math In CS Education: 'For too long, we have taught computer science as an academic discipline (as though all of our students will go on to get PhDs and then become CS faculty members) even though for most of us, our students are overwhelmingly seeking careers in which they apply computer science.'

  • Are Get-Set methods a violation of Encapsulation?

    - by Dipan Mehta
    In an object-oriented framework, the belief is that there must be strict encapsulation; hence, internal variables are not to be exposed to outside applications. But in many codebases, we see tons of get/set methods that essentially open a formal window onto internal variables that were originally intended to be strictly off-limits. Isn't that a clear violation of encapsulation? How widespread is this practice, and what should be done about it? EDIT: I have seen some discussions with two opinions at the extremes: on one hand, people believe that because the get/set interface is used to modify a parameter, it does not qualify as violating encapsulation. On the other hand, there are people who believe it does. Here is my point. Take the case of a UDP server with the methods get_URL() and set_URL(). The URL-to-listen-on property is exactly the kind of parameter the application needs to supply and modify. In the same case, however, properties like get_byte_buffer_length() and set_byte_buffer_length() clearly point to values that are quite internal. Doesn't that imply a violation of encapsulation? In fact, even get_byte_buffer_length(), which otherwise doesn't modify the object, still misses the point of encapsulation, because now there is certainly an app out there that knows I have an internal buffer! Tomorrow, if the internal buffer is replaced by something like a *packet_list*, the method becomes meaningless. Is there a universal yes/no answer for get/set methods? Is there any strong guideline that tells programmers (especially junior ones) when this violates encapsulation and when it does not?
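
    For illustration, here is a minimal C# sketch of the distinction the question is drawing (class and member names are hypothetical): the first class leaks an implementation detail through its accessors, while the second exposes only the parameters a caller legitimately owns.

        using System;

        // Leaky: callers now know the server buffers bytes internally, and how
        // big the buffer is. Swapping the buffer for a packet list later breaks
        // the published contract.
        public class LeakyUdpServer
        {
            private byte[] buffer = new byte[8192];
            public int GetByteBufferLength() { return buffer.Length; }
            public void SetByteBufferLength(int length) { buffer = new byte[length]; }
        }

        // Encapsulated: the listen URL is a genuine configuration point, while
        // buffering stays a private decision behind a behavioral method.
        public class UdpServer
        {
            private byte[] buffer = new byte[8192];  // free to become a packet list later
            public string Url { get; set; }          // legitimate external parameter

            public void TunePayloadSize(int maxDatagramBytes)
            {
                // The server decides how to honor the hint; callers never see the buffer.
                buffer = new byte[Math.Max(512, maxDatagramBytes)];
            }
        }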

  • Multi MVC processing vs Single MVC process

    - by lordg
    I've worked fairly extensively with the MVC framework CakePHP; however, I'm finding that I would rather have my pages driven by multiple MVCs than by just one. My reason is primarily to maintain a more DRY approach. In CakePHP's MVC, you call a URL, which calls a single MVC, which then calls the layout. What I want is: you call a URL, it processes a layout, which then calls multiple MVCs per component/block of HTML on the page. When you compare JavaScript components, AJAX, and server-side HTML rendering, it seems the most consistent method for building pages is through blocks of components or HTML views. That way, the view block could be situated either on the server or on the client. This is technically my ONLY disagreement with the MVC model. Outside of this, IMHO MVC rocks! My question is: what other RAD frameworks follow the same principles as MVC but are driven by the View side of MVC instead? I've looked at Django and Ruby on Rails, yet they seem to be more controller-driven. Lift/Scala appears to be somewhat of a good fit, but I'm interested to see what others exist.
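
    To make the proposed inversion concrete, here is a minimal sketch (in C#, with hypothetical names, since the question is framework-agnostic): the URL resolves to a layout, and the layout pulls in one small MVC triad per block of the page.

        using System.Collections.Generic;
        using System.Text;

        // Each block is its own tiny MVC: the controller fetches its model
        // and returns its rendered view.
        public interface IBlockController
        {
            string RenderView();
        }

        // The layout, not a single controller, drives the page: it composes
        // the HTML of every registered block.
        public class Layout
        {
            private readonly List<IBlockController> blocks = new List<IBlockController>();

            public void Add(IBlockController block) { blocks.Add(block); }

            public string RenderPage()
            {
                StringBuilder html = new StringBuilder();
                foreach (IBlockController block in blocks)
                {
                    html.Append(block.RenderView());
                }
                return html.ToString();
            }
        }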

  • Chalk Talk, Glenn Block – Leith, Edinburgh 12th March 2011

    - by David Christiansen
    Exciting news. I am proud to announce that Glenn Block from Microsoft will be coming all the way from Seattle to Scotland on the 12th of March to talk to you! Glenn is a PM on the WCF team working on Microsoft’s future HTTP and REST stack, and he has been involved in some pretty exciting and ground-breaking Microsoft development mind-shifts in recent times. Don’t miss the chance to hear him speak and ask him questions. Brief history of Glenn: Prior to WCF he was a PM on the new Managed Extensibility Framework in .NET 4.0. Glenn has a breadth of experience both inside and outside Microsoft developing software solutions for ISVs and the enterprise. He has also been very active in involving folks from the community in the development of software at Microsoft, which has included shipping several products under open source licenses, as well as assisting other teams looking to do so. Glenn is also a frequent speaker at local and international events and user groups. When he's not working and playing with technology, he spends his time with his wife and daughter, either at their home in Seattle or at one of the local coffee shops. Glenn Block on the web:
    - mvcConf 2 - Glenn Block: Take some REST with WCF (Feb 2011)
    - @gblock on twitter
    - My Technobabble - Glenn’s Blog
    Sponsored by Storm ID, an award-winning full-service digital agency in Edinburgh.

  • Windows Intune, Cloud Desktop management

    - by David Nudelman
    As part of Microsoft's cloud computing strategy, the Windows Intune beta was released today. Here’s a quick overview of what customers and IT consultants can do with the cloud service component of Windows Intune:
    - Manage PCs through a web-based console: Windows Intune provides a web-based console for IT to administer their PCs. Administrators can manage PCs from anywhere.
    - Manage updates: Administrators can centrally manage the deployment of Microsoft updates and service packs to all PCs.
    - Protection from malware: Windows Intune helps protect PCs from the latest threats with malware protection built on the Microsoft Malware Protection Engine, which you can manage through the web-based console.
    - Proactively monitor PCs: Receive alerts on updates and threats so that you can proactively identify and resolve problems with your PCs, before they impact end users and your business.
    - Provide remote assistance: Resolve PC issues, regardless of where you or your users are located, with remote assistance.
    - Track hardware and software inventory: Track hardware and software assets used in your business to efficiently manage your assets, licenses, and compliance.
    - Set security policies: Centrally manage update, firewall, and malware protection policies, even on remote machines outside the corporate network.
    And here is a quick video about Windows Intune. For support and questions, go to the TechNet Forums for Intune. Regards, David Nudelman

  • When is type testing OK?

    - by svidgen
    Assuming a language with some inherent type safety (e.g., not JavaScript): given a method that accepts a SuperType, we know that in most cases wherein we might be tempted to perform type testing to pick an action:

        public void DoSomethingTo(SuperType o) {
            if (o is SubTypeA) { ((SubTypeA)o).doSomethingA(); }
            else { ((SubTypeB)o).doSomethingB(); }
        }

    we should usually, if not always, create a single, overridable method on the SuperType and do this:

        public void DoSomethingTo(SuperType o) {
            o.doSomething();
        }

    ... wherein each subtype is given its own doSomething() implementation. The rest of our application can then be appropriately ignorant of whether any given SuperType is really a SubTypeA or a SubTypeB. Wonderful. But we're still given is-a-like operators in most, if not all, type-safe languages, and that seems to suggest a potential need for explicit type testing. So, in what situations, if any, should we or must we perform explicit type testing? Forgive my absent-mindedness or lack of creativity. I know I've done it before; but it was honestly so long ago I can't remember if what I did was good! And in recent memory, I don't think I've encountered a need to test types outside my cowboy JavaScript.
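
    One commonly cited answer: explicit type testing is hard to avoid at boundaries where static type information has already been erased, such as overriding Equals(object) or handling items deserialized from a heterogeneous stream. A small C# illustration (the Vector2D type is hypothetical):

        public sealed class Vector2D
        {
            public double X { get; private set; }
            public double Y { get; private set; }

            public Vector2D(double x, double y) { X = x; Y = y; }

            // Equals receives a plain object: the framework's signature erased
            // the type, so an explicit type test is unavoidable here.
            public override bool Equals(object obj)
            {
                Vector2D other = obj as Vector2D;
                return other != null && other.X == X && other.Y == Y;
            }

            public override int GetHashCode()
            {
                return X.GetHashCode() ^ Y.GetHashCode();
            }
        }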

  • Where do I place XNA content pipeline references?

    - by Zabby Wabby
    I am relatively new to XNA and have started to delve into the use of the content pipeline. I have already figured out the tricky issue of adding a game library containing classes for any type of .xml file I want to read. Here's the issue. I am trying to handle the reading of all XML content through an XMLHandler object that uses the intermediate deserializer. Any time reading of such data is required, the appropriate method within this object is called. So, as a simple example, something like this would occur when a character levels up:

        public Spell LevelUp(int levelAchieved)
        {
            return XMLHandler.FindSkillsForLevel(levelAchieved);
        }

    This method would then read the proper .xml file, sending back the spell for the character to learn. However, the XMLHandler is having issues even being created. I cannot get it to resolve the namespace Microsoft.Xna.Framework.Content.Pipeline. I get an error on my using statement in the XMLHandler class:

        using Microsoft.Xna.Framework.Content.Pipeline.Serialization.Intermediate;

    The error is a typical reference error: "Type or namespace name 'Pipeline' does not exist in the namespace 'Microsoft.Xna.Framework.Content' (are you missing an assembly reference?)" I THINK this is because this namespace is already referenced in my game's content project. I would really have no issue placing this object within my game's content project (since that is ALL it deals with anyway), but the Content project does not seem capable of holding anything but content files. In summary, I need to use the intermediate deserializer in my main project's logic, but, as far as I can make out, I can't safely reference the associated namespace outside of the game's content project. I'm not a terribly well-versed programmer, so I may just be missing some big detail I've never learned. How can I make this object accessible to all projects within the solution? I will gladly post more information if needed!
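
    For context, a sketch of what such a handler might look like once the reference problem is solved. The usual advice is that Microsoft.Xna.Framework.Content.Pipeline.dll must be added as an explicit assembly reference and only works in Windows projects, not Xbox 360 or Windows Phone ones; treat that as the assumption here, and the Load method name as hypothetical:

        using System.Xml;
        using Microsoft.Xna.Framework.Content.Pipeline.Serialization.Intermediate;

        public static class XMLHandler
        {
            // Deserializes an intermediate-format XML file into the given type.
            public static T Load<T>(string path)
            {
                using (XmlReader reader = XmlReader.Create(path))
                {
                    // The second argument is the reference-relative path; null
                    // is fine when the XML contains no external references.
                    return IntermediateSerializer.Deserialize<T>(reader, null);
                }
            }
        }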

  • Oracle on Oracle: Is that all?

    - by Darin Pendergraft
    On October 17th, I posted a short blog and a podcast interview with Chirag Andani, talking about how Oracle IT uses its own IDM products. Blog link here. In response, I received a comment from reader Jaime Cardoso ([email protected]) who posted:
    “- You could have talked about how by deploying Oracle's Open standards base technology you were able to integrate any new system in your infrastructure in days.
    - You could have talked about how by deploying federation you were enabling the business side to keep all their options open in terms of companies to buy and sell while maintaining perfect employee and customer's single view.
    - You could have talked about how you are now able to cut response times to your audit and security teams into 1/10th of your former times
    Instead you spent 6 minutes talking about single sign on and self provisioning? If I didn't knew your IDM offer so well I would now be wondering what its differences from Microsoft's offer was. Sorry for not giving a positive comment here but, please your IDM suite is very good and, you simply aren't promoting it well enough”

    So I decided to send Jaime a note asking him about his experience, and to get his perspective on what makes the Oracle products great. What I found out is that Jaime is a very experienced IDM architect with several major projects under his belt.

    Darin Pendergraft: Can you tell me a bit about your experience? How long have you worked in IT, and what is your IDM experience?

    Jaime Cardoso: I started working in "serious" IT in 1998 when I became Netscape's technical specialist in Portugal. Netscape Portugal didn't exist, so I was working for their VAR here. Most of my work at the time was with Netscape's mail server and LDAP server. Since that time I've been bouncing between the systems side (Sun resellers, Solaris work, and I even worked with Sun's Engineering in the making of a Hierarchical Storage Product, Sun CIS if you know it) and the applications side, mostly in LDAP and IDM. Over the years I've been doing support, service delivery and pre-sales / architecture design of IDM solutions for most big customers in Portugal. To name a few projects:
    - The first European deployment of Sun Access Manager (SAPO, Portugal Telecom)
    - The identity repository of 5/5 of the biggest Portuguese banks
    - The Portuguese government's federation of services project

    DP: OK, in your blog response, you mentioned 3 topics: 1. Using Oracle's standards-based architecture, you were able to integrate any new system in days: can you give an example? What systems, how long did it take, number of apps/users/accounts/roles, etc.?

    JC: It's relatively easy to design a user management strategy for a static environment, or if you simply assume that you're an <insert vendor here> shop and all your systems will bow to that vendor's will. We've all seen that path, the use of proprietary technologies in interoperability solutions, but then reality kicks in. As an ISP, I recall that I made the technical decision to use Active Directory as a central authentication system for the entire IT infrastructure. Clients, systems, apps, everything was there. As a good part of the systems and apps were running on UNIX, a connector was needed in order for the UNIX boxes to authenticate against AD. And that strategy worked, but each new machine required the component to be installed, monitoring had to be set up for that component, and each new app had to be independently certified.
    A self-care user portal was an ongoing project; AD access assumes the client is inside the domain, something the ISP's customers (and UNIX boxes) weren't, nor had any intention of ever being. When the Windows 2008 rollout was done, Microsoft changed the Active Directory interface. The Windows administrators didn't have enough know-how about directories and the way systems outside the MS world behaved, so on go-live things weren't properly tested and a general outage followed. Several hours and one rollback later, everything was back working. But the ISP still had to change all of its applications to work with the new access methods and restart the effort spent on the self-service user portal. To keep with the same strategy, they would also have to trust Microsoft not to change interfaces again. Simply by putting up an Oracle LDAP server in the middle and replicating the user info from AD into LDAP, most of the problems went away. Even systems for which no AD connector existed had PAM in them, so integration was made at the OS level, fully supported by the OS supplier. Sun Identity Manager already had a self-care portal combined with a user workflow, so all the clearances had to be given before an account was created or updated. Adding a new system as a client for these authentication services was simply a new checkbox in the OS installer, and even Tru64 systems were integrated for the first time, with five minutes' work by a junior system admin. True, all the Windows clients and MS apps still went to AD for their authentication needs, so from the start everybody knew they weren't 100% free of migration pains, but now they had a single point of problems to look at. If you're looking for numbers:
    - 500K directory entries (users)
    - 200-300 systems
    After the initial setup, I personally integrated about 20 systems / apps against LDAP in one day while being watched by the different IT teams. The internal IT staff did the rest.

    DP: 2. Using federation allows the business to keep options open for buying and selling companies, and yet maintain a single view for both employee and customer. What do you mean by this? Can you give an example?

    JC: The market is dynamic. The company that's being bought today will be sold again tomorrow. Companies that spread across different markets may see the regulator forcing the sale of part of a company for monopoly reasons, and companies that are in multiple countries have to comply with different legislations. Our job as IT architects, while addressing customer and employee authentication services, is quite hard, and quite contradictory. On one hand, we need to give all of our employees access to the relevant systems, apps and resources, and we already have marketing talking with us, trying to find out who's a customer of the bought company but not of ours, to address. On the other hand, we have to do that while keeping in mind that we may have to break up all that effort, and that different countries' legislation may become a problem with a full integration plan. That's a job for user federation. You don't want to be the one telling your President that he will sell that business unit without its customer database (making the deal worth a lot less), or that the buyer will take with him a copy of your entire customer database. Federation enables you to start controlling permissions for users outside of your traditional authentication realm. So what if the people of that company you just bought are keeping their old logins?
    Do you want, because of that, to have a dedicated system for their expense reports? And do you want to keep their sales (and pre-sales) people out of the loop in terms of your group's path? Control the information flow, establish a federation trust circle, and give access to your apps to users that haven't (yet?) been brought into your internal login systems. You can still see your users in a unified view, and you obviously control whether a user has access to any particular application, whether that user is in your local database or stored in a directory on the other side of the world.

    DP: 3. Cut response times of audit and security teams to 1/10. Is this a real number? Can you give an example?

    JC: No, I don't have any backing for this number. One of the companies I did system administration for has a SOX compliance policy in place (I remind you that I live in Portugal, so this definition of SOX may be somewhat different from what you're used to) and, every time the audit team says they'll do another audit, we have to negotiate with them the size of the sample, and we spend about 15 man-days gathering all the required info they ask for. I did some work with Sun's Identity Auditor and, from what I've been seeing, Oracle's product is even better, and most of the information they ask for would have been provided in a few hours with the help of this tool. I do stand by what I said here but, to be honest, someone from the Identity Auditor team would do a much better job than me explaining these time savings.

    Jaime is right: the Oracle IDM products have a lot of business value, and Oracle IT is using them for a lot more than I was able to cover in the short podcast that I posted. I want to thank Jaime for his comments and perspective. We want these blog posts to be informative and honest – so if you have feedback for the Oracle IDM team on any topic discussed here, please post your comments below.

  • TechEd 2012 - last session

    - by Stefan Barrett
    Nearly over. For the last session I'm attending a talk on C++ apps in Windows Metro. When I came to TechEd I didn't think I would attend so many sessions on C++, but somehow it has proved more interesting this year. While .NET 4.5 is interesting, I've been playing with it for a while now, so there were not a ton of surprises. Of course, I still want it at work, but who knows how long that will take, so I will just have to use it at home, once I get the licensing sorted out. So expensive. I've been pleasantly surprised with Windows 8 and will be trying that out, and at least that is covered on TechNet. While the weather has not always been perfect during TechEds, this is the worst I've seen it so far: it's wet outside. Next year's conference in New Orleans should be interesting, well, outside of the conference itself that is. I do like conferences that are held within the city itself, unlike Orlando, where there isn't really anything around.

  • Laptop shuts down upon waking from suspend

    - by Bryan Head
    The computer enters suspend either by closing the lid, choosing suspend from the top-right drop-down, or hitting the power button and pressing suspend. It doesn't matter which. I then attempt to wake the computer either by opening the lid (if it was closed) or hitting the power button. Again, it doesn't matter which. The computer will then immediately shut down about 50% of the time. It seems to be more likely to shut down the longer it has been in suspend. I took a snapshot of /var/log/pm-suspend.log after a successful resume and after a shutdown. The only difference (outside of timestamps, of course) was that on a successful resume, after reporting the success of various suspend hooks, it writes:

        Thu Jul 5 21:36:45 PDT 2012: performing suspend
        Thu Jul 5 21:37:10 PDT 2012: Awake.
        Thu Jul 5 21:37:10 PDT 2012: Running hooks for resume

    and then reports successful resume hooks. When it shuts down, the log ends at "performing suspend". I diffed the two files, so I know this is the only difference. Thus, it looks like it's not even trying to wake up. Would love some ideas on this one. I've scoured the web but can't seem to find anyone else running into the same issue (it seems more common that the computer shuts down upon entering suspend, or only when hitting the power button to wake; I haven't seen any reports that are intermittent like mine). I'll update with any requested information.

  • Catch Me If You Can

    - by Knut Vatsendvik
    Suppose you have a proxy-based web service using Oracle Service Bus. In a stage in the request pipeline, you are using a Publish action to publish the incoming message to a JMS queue using a business service. What if the outbound transport provider throws an exception (outside of your pipeline)? Is your pipeline able to catch the error with an error handler? This situation could occur because of a faulty connection, a suspended queue, or some other reason. Here is the request pipeline in our simple test case, with an Error Handler added to the message flow containing a simple Log action. By default, the Publish action will invoke the service in a fire-and-forget fashion; therefore, any exception that occurs in the outbound transport will go unnoticed, as shown in the following invocation trace. So what now? In a message flow, you can apply a Routing Options action to modify any or all of the following properties in the outbound request: URI, Quality of Service, Mode, Retry parameters, and Message Priority. Now add the Routing Options action to the Request Action as shown below. Click the Routing Options to display its properties in the Properties View, select the QoS option to set the Quality of Service element, select Exactly Once to override the default setting, and republish the project. The invocation will now block until the message is completely processed. Running the same test case as earlier generates the following invocation trace, showing that the Error Handler is now triggered.

  • CSO Summit @ Executive Edge

    - by Naresh Persaud
    If you are attending the Executive Edge at OpenWorld, be sure to check out the sessions at the Chief Security Officer Summit. Former Senior Counsel for the National Security Agency, Joel Brenner, will be speaking about his new book "America the Vulnerable". In addition, PwC will present a panel discussion on "From Crisis Management to Business Advantage: Security Leadership". See below for the complete agenda.

    TUESDAY, October 2, 2012: Chief Security Officer Summit
    - 10:00 a.m.–10:15 a.m. Welcome. Dave Profozich, Group Vice President, Oracle
    - 10:15 a.m.–11:00 a.m. America the Vulnerable. Joel Brenner, former Senior Counsel, National Security Agency
    - 11:00 a.m.–11:20 a.m. The Threats are Outside, the Risks are Inside. Sonny Singh, Senior Vice President, Oracle
    - 11:20 a.m.–12:20 p.m. From Crisis Management to Business Advantage: Security Leadership. Moderator: David Burg, Partner, Forensic Technology Solutions, PwC. Panelists: Charles Beard, CIO and GM of Cyber Security, SAIC; Jim Doggett, Chief Information Technology Risk Officer, Kaiser Permanente; Chris Gavin, Vice President, Information Security, Oracle; John Woods, Partner, Hunton & Williams
    - 12:20 p.m.–1:30 p.m. Lunch. Union Square Tent
    - 1:30 p.m.–2:00 p.m. Securing the New Digital Experience. Amit Jasuja, Senior Vice President, Identity Management and Security, Oracle
    - 2:00 p.m.–2:30 p.m. Securing Data at the Source. Vipin Samar, Vice President, Database Security, Oracle
    - 2:30 p.m.–3:00 p.m. Security from the Chairman’s Perspective. Jeff Henley, Chairman of the Board, Oracle; Dave Profozich, Group Vice President, Oracle

  • How to enable services Discovery API in GoogleCL?

    - by Marcos
    There are bits and pieces of information all over the place, but I'm trying to put it all together so that GoogleCL finally accesses more than the initial 7 services. Does anyone know of a step-by-step? Right now any attempt outside these results in an error message:

        google tasks list
        Did you specify the service correctly? Must be one of 'picasa', 'blogger', 'youtube', 'docs', 'contacts', 'calendar', 'finance'

    I installed GoogleCL from the Ubuntu repos, authenticated a few bundled services like contacts, docs, etc., and those work great, giving me access to certain operations like uploading from the command line. I would really like to get it going to support tasks and all the other eligible Google services shown at https://code.google.com/apis/explorer/#_s=tasks. Here are some guides/partial steps I've found:

        http://code.google.com/p/googlecl/wiki/DiscoveryManual (indicates needing to check out updated GoogleCL from the subversion repository)
        http://code.google.com/p/google-api-python-client/wiki/Installation
        easy_install --upgrade google-api-python-client
        http://code.google.com/p/googlecl/wiki/Install
        http://code.google.com/p/googlecl/source/checkout
        sudo -i
        cd /usr/local/src/
        svn checkout http://googlecl.googlecode.com/svn/trunk/ googlecl-read-only
        cat googlecl-read-only/INSTALL.txt
        cd /usr/local/src/googlecl-read-only/
        python setup.py install

    Result:

        $ google discovery list
        Traceback (most recent call last):
          File "/usr/bin/google", line 488, in run_interactive
            run_once(options, args)
          File "/usr/bin/google", line 540, in run_once
            options.config)
          File "/usr/bin/google", line 364, in import_service
            force_gdata_v1 = config.lazy_get(package.SECTION_HEADER,
        AttributeError: 'module' object has no attribute 'SECTION_HEADER'

  • sudo dhclient eth0 | sudo: unable to resolve host ubuntu

    - by Merianos Nikos
    I have a friend's computer that runs Ubuntu (I don't know what version, due to the current state of the system), and while it was updating the kernel, he rebooted it (yes, that can happen!). Currently I am trying to recover the system using a live USB with Ubuntu installed on it. What I am doing is the following: Update Failure. The problem is that when I try to execute the fifth step, I get an error because I do not have Internet access. The computer is properly wired to my router, and I have Internet access everywhere apart from the shell (this message, for example, is being sent via the live USB), but I cannot access the Internet via the shell. In my shell I try to use this command:

        sudo dhclient eth0

    but the result of this command is the following message:

        sudo: unable to resolve host ubuntu

    My hosts file has the following content:

        127.0.0.1 localhost
        127.0.1.1 ubuntu

        # The following lines are desirable for IPv6 capable hosts
        ::1 ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
        ff02::3 ip6-allhosts

    How can I get connected to the Internet, in order to download the appropriate updates?
    UPDATE 1: I just noticed that when I execute ifconfig I get the following warning: Warning: cannot open /proc/net/dev (No such file or directory). Limited output.
    UPDATE 2: I just found that, and it looks like it solves the problem with the dhclient eth0 command, but I still cannot ping Google.
    UPDATE 3: Now sudo dhclient eth0 returns the following message: RTNETLINK answers: File exists
    UPDATE 4: I just pinged my router and got a response, so it looks like I cannot ping outside the router (i.e. Google).
    Kind regards ...

  • Dirt Cheap DSLR Viewfinder Improves Outdoor DSLR LCD Visibility

    - by Jason Fitzpatrick
    If the excitement you felt about having a DSLR capable of shooting video wore off the second you took it outside and realized you needed an expensive add-on viewfinder to use it in sunlight, this cheap DIY viewfinder is for you. The digital video capabilities of new DSLR cameras are amazing and are changing the way people approach movie production. What’s not awesome, however, is how the LCD screen gets completely washed out in bright conditions, and you almost always have to buy a $50+ aftermarket accessory to make the LCD functional under those conditions. Courtesy of the Frugal Film Maker, we have the following video tutorial showing us how to turn a plastic container, a cheap dollar-store magnifying glass, a headphone ear cover, and some glue and hair ties into a dirt cheap LCD viewfinder. You’ll never have to squint or miss a shot because of bright lighting conditions again; better yet, you’ll only spend a few bucks on the whole project. For step-by-step instructions in print form, hit up the link below. Homemade DSLR Viewfinder [Instructables via Make]

  • How to get search engines to properly index an ajax driven search page

    - by Redtopia
    I have an ajax-driven search page that allows users to search through a large collection of records. Each search result points to index.php?id=xyz (where xyz is the id of the record). The initial view does not have any records listed, and there is no interface that allows you to browse through all records; you can only conduct a search. How do I build the page so that spiders can crawl each record? Or is there another way (outside of this specific search page) to point spiders to a list of all records? FYI, the collection is rather large, so dumping links to every record in a single request is not a workable solution; outputting the records must be done in multiple requests. Each record can be viewed via a single page (e.g. "record.php?id=xyz"). I would like all the records indexed without anything being indexed from the sitemap-style page that shows where the records exist, for example:

        <a href="/result.php?id=record1">Record 1</a>
        <a href="/result.php?id=record2">Record 2</a>
        <a href="/result.php?id=record3">Record 3</a>
        <a href="/seo.php?page=2">next</a>

    Assuming this is the correct approach, I have these questions:
    - How would the search engines find the crawl page?
    - Is it possible to prevent the search engines from indexing the words "Record 1", etc. and "next"? Can I output only the links? Or maybe something like:

  • Interfaces, Adapters, exposing business objects via WCF design

    - by Onam
    I know there have been countless discussions about this, but I think this question is slightly different and may prompt a heated discussion (let's keep it friendly). The scene: I am developing a system as a means of learning various concepts, and I came across a predicament my brain is conflicted about: whether to keep my interfaces in a separate class library or have them live side by side with my business objects. I want to expose certain objects via WCF, but refuse to expose them in their entirety. I am sure most will agree that exposing properties such as IDs is not good practice, but I also don't want to have my business objects decorated with attributes. The question: Essentially, I'll have a separate interface for each of my objects that will be exposed to the outside world (which could end up being quite a few), so does it make sense to create a separate class library for the interfaces? This also raises the question of whether adapters should live in a separate class library too, as ideally I want a mechanism for transferring from one object to the other and vice versa.
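
    A sketch of one common arrangement (an assumption for illustration, not a prescription; all names hypothetical): the WCF attributes live on thin DTOs in a contracts library, and an adapter maps the business object to the DTO, so the domain types stay free of serialization concerns.

        using System.Runtime.Serialization;

        // Contracts library: only what the outside world is allowed to see.
        [DataContract]
        public class CustomerDto
        {
            [DataMember] public string DisplayName { get; set; }
        }

        // Business library: the rich object keeps its internals (no WCF attributes).
        public class Customer
        {
            internal int Id { get; set; }   // never crosses the service boundary
            public string Name { get; set; }
        }

        // Adapter: the single place that knows how to translate between the two.
        public static class CustomerAdapter
        {
            public static CustomerDto ToDto(Customer customer)
            {
                return new CustomerDto { DisplayName = customer.Name };
            }

            public static void UpdateFrom(Customer customer, CustomerDto dto)
            {
                customer.Name = dto.DisplayName;
            }
        }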

  • One page using querystring or many folders and pages?

    - by ClarkeyBoy
    I have an application where I have the 'core' code in one folder, for which there is a virtual directory in the root, such that I can include any core files using /myApp/core/bla.asp. I then have two folders outside of this with a default.asp, which currently use the querystring to define what page should be displayed. One page is for general users; the other will only be accessible to users who have permission to manage users / usergroups / permissions. The core code checks the querystring and then checks the permissions for that user. An example as it is now: default.asp?action=view&viewtype=list&objectid=server. I am not worried about SEO, as this is an internal app and uses Windows Auth. My question is: is it better the way it is now, or would it be better to have something like the following?

        /server/view/list/
        /server/view/?id=123
        /server/create/
        /server/edit/?id=123
        /server/remove/?id=123

    In the above folders I would have a home page which defines all the variables currently determined by the querystring; in /server/create/, for example, I would define the action as 'create', the object name as 'server', and so on. In terms of future development, I really have no idea which method would be best. I think the second method would be best in terms of following what page does what, but this is such a huge change to make at this stage that I would really like some opinions, preferably based on experience. PS Sorry if the tags are wrong; I am new to this forum and thought this was a bit too much of a discussion for StackOverflow, as that is very much right/wrong answer based. I got the idea SE is more discussion based.

  • Data transfer between "main" site and secured virtual subsite

    - by Emma Burrows
    I am currently working on a C# ASP.Net 3.5 website I wrote some years ago, which consists of a "main" public site and a sub-site which is our customer management application, using forms-based authentication. The sub-site is set up as a virtual folder in IIS, and though it's a subfolder of "main", it functions as a separate web app which handles CRUD access to our customer database and is only accessible by our staff. The main site currently includes a form for new leads to fill in, which generates an email to our sales staff so they can contact them and convince them to become customers. If that process is successful, the staff manually enter the information from the email into the database. Not surprisingly, I now have a new requirement to feed the data from the new lead form directly into the database, so staff can just check a box, for instance, to turn the lead into a customer. My question therefore is how to go about doing this. Possible options I've thought of:
    - Move the new lead form into the customer database subsite (with authentication turned off).
    - Add database handling code to the main site. (No, not seriously considering this duplication of effort! :)
    - Design some mechanism (via REST?) so a web page outside the customer database subsite can feed data into the customer database (a rough sketch of this follows below).
    I'd welcome some suggestions on how to organise the code for this situation, preferably with extensibility in mind, and particularly if there are any options I haven't thought of. Thanks in advance.
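
    For the third option, a minimal sketch of what an internal intake endpoint inside the secured subsite might look like (this is an assumption, not the poster's design; the handler name, form fields, and shared-key check are all hypothetical, and the key check is a stand-in for real authentication):

        using System.Web;

        public class LeadIntakeHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // Reject posts that don't carry the shared secret the main
                // site was configured with (keep the secret out of source control).
                if (context.Request.Form["apiKey"] != "shared-secret-here")
                {
                    context.Response.StatusCode = 403;
                    return;
                }

                string name = context.Request.Form["name"];
                string email = context.Request.Form["email"];

                // Reuse the subsite's existing data layer here, e.g.:
                // LeadRepository.Insert(name, email);

                context.Response.StatusCode = 201;  // created
            }
        }

    The handler's path would also need to be carved out of the forms-authentication protection (a <location> element granting anonymous access to just that URL), since the main site posts to it without a staff login.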

  • Read Wall Street Journal for free using Google Search

    - by Gopinath
    The Wall Street Journal publishes very informative articles, and most of them are available only to paid subscribers. But it is very easy to defeat the paywall of the Wall Street Journal and read the articles for free with the help of Google. The trick is to Google for the Wall Street Journal article and click the link displayed in the search results to read the article for free. It’s very simple and easy if you are using the Google Chrome browser, and it should be just as straightforward in Firefox and Internet Explorer. Here is a walkthrough of unlocking today’s Wall Street Journal paid article:
    1. When you are on online.wsj.com, select the title of the subscribers-only article you want to read.
    2. Right-click the selected title and choose the option Search Google for “<article title>”.
    3. Locate the WSJ article in the Google search results and click on the article link.
    4. Boom! You have full access to the article; enjoy reading it for free.
    I’ve been using this trick for a while from the US to access WSJ articles for free. Most likely it will work for users located outside the USA as well.

  • lucid 10.04 LTS => Precise 12.04.1 : upgrade doesn't work

    - by Rastom
    I googled and looked into all known issues on the Ubuntu forums, but I can't figure out why a 10.04 LTS server won't detect the latest LTS, 12.04.1. I guess since 12.04 is a fresh dist, not much is reported for related issues. Here is what I did:

        apt-get update
        apt-get upgrade
        apt-get install update-manager-core

    It was already installed, so no update for this package. I checked /etc/update-manager/release-upgrades:

        [DEFAULT]
        # Default prompting behavior, valid options:
        #
        # never  - Never check for a new release.
        # normal - Check to see if a new release is available. If more than one new
        #          release is found, the release upgrader will attempt to upgrade to
        #          the release that immediately succeeds the currently-running
        #          release.
        # lts    - Check to see if a new LTS release is available. The upgrader
        #          will attempt to upgrade to the first LTS release available after
        #          the currently-running one. Note that this option should not be
        #          used if the currently-running release is not itself an LTS
        #          release, since in that case the upgrader won't be able to
        #          determine if a newer release is available.
        Prompt=lts

    I also checked my /etc/apt/sources.list before running apt-get:

        deb http://archive.ubuntu.com/ubuntu/ lucid main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu/ lucid-security main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu lucid-security main restricted
        deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted
        deb http://security.ubuntu.com/ubuntu lucid-security universe
        deb-src http://security.ubuntu.com/ubuntu lucid-security universe
        deb http://security.ubuntu.com/ubuntu lucid-security multiverse
        deb-src http://security.ubuntu.com/ubuntu lucid-security multiverse
        # deb http://landscape.canonical.com/packages/hardy ./
        # deb-src http://landscape.canonical.com/packages/hardy ./

    And then, following the Ubuntu guide for the Precise upgrade, the command below should work:

        root@xxxxxxxxx:/etc/apt# do-release-upgrade -d
        Checking for a new ubuntu release
        No new release found

    So am I missing something? The server was accessing the outside through a proxy, but I granted this server direct access to rule out any Internet access problem or redirection; still no clue... Any help would be appreciated.

  • Java issues on OpenVZ Ubuntu 11.04 (.jar/.sh files)

    - by IWillNotChange
    I've had a whole line of messes with Java and .jar files. I've tried both OpenJDK (from the software installer) and about three repositories for Sun.

        ~/Desktop# java -jar -Xmx1024m ss.jar
        Exception in thread "main" java.awt.HeadlessException
            at java.awt.GraphicsEnvironment.checkHeadless(GraphicsEnvironment.java:173)
            at java.awt.Window.<init>(Window.java:476)
            at java.awt.Frame.<init>(Frame.java:419)
            at java.awt.Frame.<init>(Frame.java:384)
            at javax.swing.JFrame.<init>(JFrame.java:174)
            at org.powerbot.bd.<init>(Unknown Source)
            at org.powerbot.Boot.main(Unknown Source)

    Two separate errors:

        ~/Desktop# ./ss.sh
        [SEVERE] org.server.Boot: Default heap size of 490m too small, restarting with 768m

    and about 30 different crashes where it just "aborts" with a huge file dump. Each time I've tried something a little different, whether it be updating Java or just changing -Xmx1024 to -Xmx1024m to get rid of the heap error. Personally, I think it has something to do with OpenVZ, but Google hasn't saved me this time; I need someone who can get to the bottom of my problem. My current install:

        java version "1.6.0_26"
        Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
        Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)

    Running ss.sh gives me (I'd post the entire log, but it's long):

        #
        # A fatal error has been detected by the Java Runtime Environment:
        #
        # SIGILL (0x4) at pc=0x00002b14278e6fa0, pid=9301, tid=47365590714112
        #
        # JRE version: 6.0_26-b03
        # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.1-b02 mixed mode linux-amd64 compressed oops)
        # Problematic frame:
        # C [ld-linux-x86-64.so.2+0x14fa0] _dl_make_stack_executable+0x2b50
        #
        # If you would like to submit a bug report, please visit:
        # http://java.sun.com/webapps/bugreport/crash.jsp
        # The crash happened outside the Java Virtual Machine in native code.
        # See problematic frame for where to report the bug.
        #

    I'm willing to let someone who knows what they are talking about view it and try to sort this out. Any help would be appreciated; I've about pulled out all my hair Googling to no avail.

  • SharePoint Web Part Constructor Fires Twice When Adding it to the Page (and has a different security context)

    - by Damon
    We had some exciting times debugging an interesting issue with SharePoint 2007 web parts. We had some code in staging that had been running just fine for weeks and had not been touched or changed in about the same amount of time. However, when we tried to move the web part into a different staging environment, the part started throwing a security exception when we tried to add it to a page. After a bit of debugging, we determined that the web part was throwing the exception while trying to access the SPGroups property on the SharePoint site. This was pretty strange, because we were logged in as an admin and the code had been working perfectly fine before. During the debugging process, however, we found out that the web part constructor was being fired twice. On one request, the security context did not seem to have everything it needed in order to run. On the other request, the security context was populated with the context of the user making the request (like it normally is). Moving the security code outside of the constructor seems to have fixed the issue. Why the discrepancy between the two staging environments? It turns out we deployed the part originally, then deployed an update with the security code. Since the part was never "added" to the page after the code updates were made (we just deployed a new assembly to make the updates), we never saw the problem. It seems as though the constructor fires twice when you are adding the web part to the page, and when you run the web part from the web part gallery. My only thought on why this would occur is that SharePoint is instantiating an instance to get some information from it, which is odd, because you would think that could be done with reflection without requiring a new object. Anyway, the workaround is to not put anything security-related inside the constructor, or to account carefully for the possibility of the security context not being present when the item is being added to the page.
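
    A minimal sketch of the workaround described above (an assumed shape, not the original code; the part and member names are hypothetical):

        using System;
        using System.Web.UI.WebControls.WebParts;
        using Microsoft.SharePoint;

        public class GroupAwarePart : WebPart
        {
            // Keep the constructor free of security work: SharePoint may
            // construct the part under a partial security context while
            // adding it to a page or rendering the gallery.
            public GroupAwarePart() { }

            protected override void OnLoad(EventArgs e)
            {
                base.OnLoad(e);

                // By page-lifecycle time the request's user context is
                // populated, so touching SPWeb.Groups is safe here.
                SPWeb web = (SPContext.Current == null) ? null : SPContext.Current.Web;
                if (web != null)
                {
                    int groupCount = web.Groups.Count;
                }
            }
        }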

  • XNA and C# vs the 360's in-order processor

    - by Richard Fabian
    Having read this rant, I feel that they're probably right (seeing as the Xenon and the Cell BE are both highly sensitive to branchy and cache-ignorant code), but I've heard that people are making games fine with C# too. So, is it true that it's impossible to do anything professional with XNA, or was the poster just missing what makes XNA the killer API for game development? By professional, I don't mean in the sense that you can make money from it, but more in the sense that the more professional games have physics models and animation systems that seem outside the reach of a language with such definite barriers to the intrinsics. If I wanted to write a collision system or fluid dynamics engine for my game, I don't feel like C# offers me the chance to do that, as the runtime code generator and the lack of intrinsics get in the way of performance. However, many people seem to be fine working within these constraints, making their successful games but seemingly avoiding any of the problems by omission. I've not noticed any XNA games doing anything complex other than what's already provided by the libraries or handled by shaders. Is this avoidance of the more complex game dynamics because of the limitations of C#, or just people concentrating on getting it done? In all honesty, I can't believe that AI War can maintain that many units of AI without some data-oriented approach, or some dirty C# hacks to make it run better than the standard approach, and I guess that's partly my question too: how have people hacked C# so it's able to do what games coders do naturally with C++?
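
    For what a "data-oriented approach" can look like in plain C# (an illustrative sketch, not taken from any shipped game): lay the hot data out as parallel arrays of value types, so the inner loop walks memory sequentially instead of chasing object references, which is the access pattern in-order cores like the Xenon reward.

        // Structure-of-arrays layout for many units: each update pass streams
        // through tightly packed float arrays with no branches or virtual calls.
        public class UnitSystem
        {
            private readonly float[] posX;
            private readonly float[] posY;
            private readonly float[] velX;
            private readonly float[] velY;

            public UnitSystem(int capacity)
            {
                posX = new float[capacity];
                posY = new float[capacity];
                velX = new float[capacity];
                velY = new float[capacity];
            }

            public void Integrate(float dt)
            {
                for (int i = 0; i < posX.Length; i++)
                {
                    posX[i] += velX[i] * dt;  // sequential, cache-friendly access
                    posY[i] += velY[i] * dt;
                }
            }
        }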
