Search Results


  • Samba share not on the network after upgrading to Ubuntu 12.04 LTS

    - by Sylvain Huard
    I just upgraded an old Ubuntu box to 12.04 LTS (machine named A-Ubuntu). This was an upgrade, not a format and re-install; all the accounts and config were preserved. The basic setup is a local network with two Ubuntu machines (call them A-Ubuntu and B-Ubuntu) and a Mac (C-MAC). Before the upgrade, all of them could see each other by name, not only by IP address. The local network has a D-Link router to which everything is connected with RJ-45 wired Ethernet (not Wi-Fi). Since the A-Ubuntu upgrade, we can't see this machine's name on the network, and its name no longer appears in the machine list on the D-Link router; we can see its IP address only. I can't access A-Ubuntu from the other two by name, but I can ping it by its address (192.168.0.109). From A-Ubuntu, I can connect to and see the shared Samba folders on B-Ubuntu and C-MAC, but from B-Ubuntu and C-MAC I can't connect to A-Ubuntu. Correct me if I'm wrong, but this tells me that Samba should be fine and the real problem is that A-Ubuntu does not advertise its name on the network, so the D-Link does not have it in its table and nobody else finds it. After a lot of googling, I see that this is the job of Avahi and mDNS. Those packages are running; I checked multiple config files for Samba, Avahi and mDNS against the examples on the web, and they are also similar to what I find on the working B-Ubuntu machine. I did multiple service restarts of Samba and Avahi, and removed the firewall to make sure it does not block the hostname broadcast. I rebooted multiple times to make sure the updates I was making were effective. Still, I can't see the A-Ubuntu name on the network. Any idea what it could be? Where should I look next?
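
    For reference, here is a minimal diagnostic sketch (Python 3, run from one of the other machines) that checks each name-resolution path separately. It assumes the nmblookup and avahi-resolve tools are installed (from the samba-common-bin and avahi-utils packages), and the hostname is a placeholder:

        import subprocess

        HOST = "A-Ubuntu"  # substitute the machine name in question

        checks = [
            ("NetBIOS (Samba's nmbd)", ["nmblookup", HOST]),
            ("mDNS (Avahi)", ["avahi-resolve", "-n", HOST.lower() + ".local"]),
            ("system resolver", ["getent", "hosts", HOST]),
        ]

        for label, cmd in checks:
            try:
                result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
                status = "OK" if result.returncode == 0 else "FAILED"
                detail = (result.stdout or result.stderr).strip()
            except FileNotFoundError:
                status, detail = "TOOL MISSING", cmd[0]
            print(label + ": " + status)
            print("  " + detail)

    If mDNS resolves but NetBIOS does not, that points at nmbd on A-Ubuntu rather than at Avahi (a router's client table is typically populated from NetBIOS/DHCP names, not mDNS).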

    Read the article

  • Tips for switching jobs and moving into web based programming?

    - by JerryC
    I graduated in 2006 with a computer science degree and got solid grades (3.5 overall, 3.8 in my major). For the past 4.5 years I've been working as a software engineer doing primarily rich-client development. Most of my experience is with Java, Swing and C++. I've done a lot of network programming and I have acquired some skill working and debugging in distributed environments. I would like to switch jobs and move into a role where I can get exposure to some new technologies and frameworks, ideally a more web-development-oriented role, but I find my lack of web development experience is hurting me. 90% of the jobs I see advertised are looking for one of two skill sets: 1) the stereotypical server-side Java web developer, with experience in Spring, Hibernate, J2EE, etc.; or 2) the stereotypical front-end web developer, with experience in JavaScript, jQuery, HTML5, GWT, CSS, etc. I find most of these companies are looking very specifically for this experience and are not willing to take on good programmers with strong CS fundamentals who lack experience with this stack. I would love to get a job doing stuff like this, but have my skills become out of date and unmarketable? Any opinions on ways to sell myself to help get a new position?

    Read the article

  • PCI compliance when using third-party processing

    - by Moses
    My company is outsourcing the development of our new e-commerce site to a third-party web development company. The way they set up our site to handle transactions is by having the user enter the necessary payment info, then passing that data to a third-party merchant that processes the payment, then completing the transaction if everything is good. When the issue of PCI DSS compliance was raised, they said: "You won't need PCI certification, because the client's browser will send the sensitive information directly to the third-party merchant when the transaction is processed. However, the process will be transparent to the user, because all interface and displays are controlled by us. The only server required to be compliant is the third-party merchant's, because no sensitive card data ever touches your server or web app." Even though I very much trust and respect the knowledge of our web developers, what they are saying is raising some serious red flags for me. The way the site is described, I am sure we will not be using a hosted payment page like PayPal or Google Checkout offers (how could we maintain control over the UI if we were?). And while my knowledge of e-commerce is laughable at best, it seems like the only other option for us would be to use XML Direct to communicate with our third-party merchant for processing. My two questions are as follows: Based on everything you've read, is "XML Direct" the only option they could conceivably be using, or is there another method I don't know of which they could be implementing? Most importantly, is it true our site does not need PCI certification? As I understand it, using the XML Direct method means that we do have to be PCI DSS certified, and the only way around getting certified is through a hosted payment page (i.e. PayPal).

    Read the article

  • Performance due to entity update

    - by Rizzo
    I always think about two ways to code the global Step() function, both with pros and cons. Please note that AIStep is just to provide one more kind of step for whoever wants it.

        // Approach 1: single pass
        foreach( entity in entities )
        {
            entity.DeltaStep( delta_time );
            if( time_for_fixed_step ) entity.FixedStep();
            if( time_for_AI_step ) entity.AIStep();
            ... // all kinds of updates you want
        }

    PRO: you only have to iterate once over all entities. CON: fidelity can be lower in some scenarios, since entity.FixedStep() isn't run for all entities at the same moment.

        // Approach 2: separated passes
        foreach( entity in entities )
            entity.DeltaStep( delta_time );
        if( time_for_fixed_step )
            foreach( entity in entities )
                entity.FixedStep();
        if( time_for_AI_step )
            foreach( entity in entities )
                entity.AIStep();
        // all kinds of updates you want, SEPARATED

    PRO: fidelity on FixedStep is higher; little time passes between the updates of all entities, whereas in Approach 1 an entity may have to wait on other kinds of updates before its FixedStep() comes. CON: you iterate once for each kind of update. Also, a third approach could be a hybrid of the two, something along the lines of:

        foreach( entity in entities )
        {
            entity.DeltaStep( delta_time );
            if( time_for_AI_step ) entity.AIStep();
            // all kinds of updates you want EXCEPT FixedStep()
        }
        if( time_for_fixed_step )
        {
            foreach( entity in entities )
            {
                entity.FixedStep();
            }
        }

    Just two loops, caring about time fidelity only for FixedStep(). Any thoughts on this matter? Does it really matter to run all steps in a single pass, or am I thinking about problems that don't exist?

    Read the article

  • Recovering a lost website with no backup?

    - by Jeff Atwood
    Unfortunately, our hosting provider experienced 100% data loss, so I've lost all content for two hosted blog websites: http://blog.stackoverflow.com and http://www.codinghorror.com. (Yes, yes, I absolutely should have done complete offsite backups. Unfortunately, all my backups were on the server itself. So save the lecture; you're 100% absolutely right, but that doesn't help me at the moment. Let's stay focused on the question here!) I am beginning the slow, painful process of recovering the websites from web crawler caches. There are a few automated tools for recovering a website from internet web spider (Yahoo, Bing, Google, etc.) caches, like Warrick, but I had some bad results using it: my IP address was quickly banned from Google for using it; I got lots of 500 and 503 errors and "waiting 5 minutes…" messages; and ultimately I could recover the text content faster by hand. I've had much better luck by using a list of all blog posts, clicking through to the Google cache and saving each individual file as HTML. While there are a lot of blog posts, there aren't that many, and I figure I deserve some self-flagellation for not having a better backup strategy. Anyway, the important thing is that I've had good luck getting the blog post text this way, and I am definitely able to get the text of the web pages out of the internet caches. Based on what I've done so far, I am confident I can recover all the lost blog post text and comments. However, the images that go with each blog post are proving…more difficult. Any general tips for recovering website pages from internet caches, and in particular, places to recover archived images from website pages? (And, again, please, no backup lectures. You're totally, completely, utterly right! But being right isn't solving my immediate problem… unless you have a time machine…)
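
    If you end up scripting the recovery, one more source worth checking is the Internet Archive's Wayback Machine. A hedged sketch (Python 3) against its snapshot-availability endpoint; the endpoint's existence and behavior are an assumption worth re-checking, and the URL list here is a made-up placeholder:

        import json
        import urllib.parse
        import urllib.request

        # hypothetical sample; in practice, the full list of lost post URLs
        urls = ["http://www.codinghorror.com/blog/"]

        for url in urls:
            api = ("http://archive.org/wayback/available?url="
                   + urllib.parse.quote(url, safe=""))
            with urllib.request.urlopen(api) as resp:
                data = json.load(resp)
            closest = data.get("archived_snapshots", {}).get("closest")
            print(url, "->", closest["url"] if closest else "no snapshot found")

    Archived snapshots often include the images a page references, which may help with the harder half of the problem.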

    Read the article

  • Moodle 2 pages loading up to 2000% faster

    - by TJ
    On average our Moodle 2 pages were loading in 2.8 seconds; now they load in as little as 0.12 seconds, so that's like 2333.333% faster! What was it, I hear you say? Well, it was the database connection, or more correctly the database library. I was using FreeTDS (http://docs.moodle.org/22/en/Installing_MSSQL_for_PHP), but now I'm using the new Microsoft Drivers 3.0 for PHP for SQL Server (http://www.microsoft.com/en-us/download/details.aspx?id=20098). I'm in a Windows Server IT department, and in both our live and development environments we have Moodle 2.2.3, IIS 7.5 and PHP 5.3.10 running on two Windows Server 2008 R2 servers, using MS Network Load Balancing. Since moving to Moodle 2, the pages have always loaded much more slowly than they did in Moodle 1.9, and I've been chasing this issue for quite a while. I had previously tried the Microsoft Drivers for PHP for SQL Server 2.0, but my testing showed it was slower than the FreeTDS driver. Then yesterday I found Microsoft had released the new version, Microsoft Drivers 3.0 for PHP for SQL Server, so I thought I'd give it a run, and wow, what a difference it made. I have more testing to do, but so far it's looking good; I have scheduled multi-user load testing for early next week (fingers crossed). To make the change, all I needed to do was:

    - download the drivers
    - copy the relevant files to PHP\ext (for us they were php_pdo_sqlsrv_53_nts.dll and php_sqlsrv_53_nts.dll)
    - install the Microsoft SQL Server 2012 Native Client x64 (http://www.microsoft.com/en-us/download/details.aspx?id=29065)
    - add to PHP.ini: extension=php_pdo_sqlsrv_53_nts.dll and extension=php_sqlsrv_53_nts.dll
    - remove from PHP.ini: extension=php_dblib.dll
    - change in PHP.ini: mssql.textlimit = 20971520 and mssql.textsize = 20971520
    - change Moodle config.php: $CFG->dbtype = 'sqlsrv'; and 'dbpersist' => True
    - reboot and test

    I've browsed courses, backed up and restored some really large and complicated courses, deleted courses, etc., and it's all good. Still more testing to do, but hey, this is a good start. Hope this helps anyone experiencing the same slowness…

    Read the article

  • Did 12.04 just add multi-touch gesture support mid-release?

    - by adempewolff
    I was reviewing the updates I was about to download today and I noticed that a lot of them had to do with gesture support, and that many of these were new installs rather than upgrades. Has 12.04 just added multi-touch gesture support mid-release? If so, what capabilities does this add? Which applications already support these capabilities, and can I expect others to add support in the near future? Here are the packages that were installed:

    Install: libframe6:amd64 (2.2.4-0ubuntu0.12.04.1), libgeis1:amd64 (2.2.9.2-0ubuntu1), libgrail5:amd64 (3.0.6-0ubuntu0.12.04.01, automatic)

    And here are those that were upgraded (also including many with touch support):

    Upgrade: libgrip0:amd64 (0.3.4-0ubuntu2~ubuntu12.04.1, 0.3.5-0ubuntu1~12.04.1), eog:amd64 (3.4.2-0ubuntu1, 3.4.2-0ubuntu1.1), ginn:amd64 (0.2.4-0ubuntu1, 0.2.4.1-0ubuntu1)

    The descriptions for the new installs are:

    libgeis1 (Gesture engine interface support): A common API for clients of a systemwide gesture recognition and propagation engine.
    libframe6 (Touch Frame Library): This library handles the buildup and synchronization of a set of simultaneous touches. The library is input agnostic, with bindings for mtdev, frame and XI2.1.
    libgrail5 (Gesture Recognition And Instantiation Library): This library consists of an interface and tools for handling gesture recognition and gesture instantiation. Applications can use the grail callbacks to receive gesture primitives and raw input events from the underlying kernel device.

    And the descriptions for the upgraded packages are:

    libgrip0 (provides multitouch gestures to GTK+ apps): Libgrip hooks gesture recognition into GTK+ applications.
    ginn (Gesture Injector: No-GEIS, No-Toolkits): A daemon with jinn-like wish-granting capabilities: it gives applications the ability to support a subset of multi-touch gestures without having to integrate GEIS or multi-touch GTK/Qt libs.

    Adding a ton of new libraries and upgrading the existing components makes me wonder if 12.04 is meant to start natively supporting gestures other than two-finger scroll in the near future. I expected these capabilities to be introduced soon, but I thought they would only be rolled out in a new release, not as upgrades to an existing release. Anyone have any info about this?

    Read the article

  • Combined Likelihood Models

    - by Lukas Vermeer
    In a series of posts on this blog we have already described a flexible approach to recording events, a technique to create analytical models for reporting, a method that uses the same principles to generate extremely powerful facet based predictions and a waterfall strategy that can be used to blend multiple (possibly facet based) models for increased accuracy. This latest, and also last, addition to this sequence of increasing modeling complexity will illustrate an advanced approach to amalgamate models, taking us to a whole new level of predictive modeling and analytical insights: combination models predicting likelihoods using multiple child models. The method described here is far from trivial. We therefore would not recommend you apply these techniques in an initial implementation of Oracle Real-Time Decisions. In most cases, basic RTD models or the approaches described before will provide more than enough predictive accuracy and analytical insight. The following is intended as an example of how more advanced models could be constructed if implementation results warrant the increased implementation and design effort. Keep implemented statistics simple!

    Combining likelihoods

    Because facet based predictions are based on metadata attributes of the choices selected, it is possible to generate such predictions for more than one attribute of a choice. We can predict the likelihood of acceptance for a particular product based on the product category (e.g. ‘toys’), as well as based on the color of the product (e.g. ‘pink’). Of course, these two predictions may be completely different (the customer may well prefer toys, but dislike pink products) and we will have to somehow combine these two separate predictions to determine an overall likelihood of acceptance for the choice. Perhaps the simplest way to combine multiple predicted likelihoods into one is to calculate the average (or perhaps maximum or minimum) likelihood. However, this would completely forgo the fact that some facets may have a far more pronounced effect on the overall likelihood than others (e.g. customers may consider the product category more important than its color). We could opt for calculating some sort of weighted average, but this would require us to specify up front the relative importance of the different facets involved. This approach would also be unresponsive to changing consumer behavior in these preferences (e.g. product price bracket may become more important to consumers as a result of economic shifts). Preferably, we would want Oracle Real-Time Decisions to learn, act upon, and tell us about the correlations between the different facet models and the overall likelihood of acceptance. This additional level of predictive modeling, where a single supermodel (no pun intended) combines the output of several (facet based) models into a single prediction, is what we call a combined likelihood model.

    Facet Based Scores

    As an example, we have implemented three different facet based models (as described earlier) in a simple RTD inline service. These models will allow us to generate predictions for likelihood of acceptance for each product based on three different metadata fields: Category, Price Bracket and Product Color. We will use an Analytical Scores entity to store these different scores so we can easily pass them between different functions.
    A simple function, creatively named Compute Analytical Scores, will compute for each choice the different facet scores and return an Analytical Scores entity that is stored on the choice itself. For each score, a choice attribute referring to this entity is also added, to be returned to the client to facilitate testing.

    One Offer To Predict Them All

    In order to combine the different facet based predictions into one single likelihood for each product, we will need a supermodel which can predict the likelihood of acceptance based on the outcomes of the facet models. This model will not need to consider any of the attributes of the session, because they are already represented in the outcomes of the underlying facet models. For the same reason, the supermodel will not need to learn separately for each product, because the specific combination of facets for this product is also already represented in the output of the underlying models. In other words, instead of learning how session attributes influence acceptance of a particular product, we will learn how the outcomes of facet based models for a particular product influence acceptance at a higher level. We will therefore be using a single All Offers choice to represent all offers in our combined likelihood predictions. This choice has no attribute values configured, no scores and not a single eligibility rule; nor is it ever intended to be returned to a client. The All Offers choice is to be used exclusively by the Combined Likelihood Acceptance model to predict the likelihood of acceptance for all choices, based solely on the output of the facet based models defined earlier.

    The Switcheroo

    In Oracle Real-Time Decisions, models can only learn based on attributes stored on the session. Therefore, just before generating a combined prediction for a given choice, we will temporarily copy the facet based scores—stored on the choice earlier as an Analytical Scores entity—to the session. The code for the Predict Combined Likelihood Event function is outlined below.

        // set session attribute to contain facet based scores.
        // (this is the only input for the combined model)
        session().setAnalyticalScores(choice.getAnalyticalScores());
        // predict likelihood of acceptance for All Offers choice.
        CombinedLikelihoodChoice c = CombinedLikelihood.getChoice("AllOffers");
        Double la = CombinedLikelihoodAcceptance.getChoiceEventLikelihoods(c, "Accepted");
        // clear session attribute of facet based scores.
        session().setAnalyticalScores(null);
        // return likelihood.
        return la;

    This sleight of hand will allow the Combined Likelihood Acceptance model to predict the likelihood of acceptance for the All Offers choice using these choice specific scores. After the prediction is made, we will clear the Analytical Scores session attribute to ensure it does not pollute any of the other (facet) models. To guarantee our combined likelihood model will learn based on the facet based scores—and is not distracted by the other session attributes—we will configure the model, on the model attributes tab, to exclude any other inputs save for the instance of the Analytical Scores session attribute.

    Recording Events

    In order for the combined likelihood model to learn correctly, we must ensure that the Analytical Scores session attribute is set correctly at the moment RTD records any events related to a particular choice. We apply essentially the same switching technique as before in a Record Combined Likelihood Event function.
        // set session attribute to contain facet based scores
        // (this is the only input for the combined model).
        session().setAnalyticalScores(choice.getAnalyticalScores());
        // record input event against All Offers choice.
        CombinedLikelihood.getChoice("AllOffers").recordEvent(event);
        // force learn at this moment using the Internal Dock entry point.
        Application.getPredictor().learn(InternalLearn.modelArray, session(), session(),
                                         Application.currentTimeMillis());
        // clear session attribute of facet based scores.
        session().setAnalyticalScores(null);

    In this example, Internal Learn is a special informant configured as the learn location for the combined likelihood model. The informant itself has no particular configuration and does nothing in itself; it is used only to force the model to learn at the exact instant we have set the Analytical Scores session attribute to the correct values.

    Reporting Results

    After running a few thousand (artificially skewed) simulated sessions on our ILS, the Decision Center reporting shows some interesting results. In this case, these results reflect perfectly the bias we ourselves had introduced in our tests. In practice, we would obviously use a wider range of customer attributes and expect to see some more unexpected outcomes. The facetted model for categories has clearly picked up on the fact that our simulated youngsters have little interest in purchasing the one red-hot vehicle our ILS had on offer. Also, it would seem that customer age is an excellent predictor for the acceptance of pink products. Looking at the key drivers for the All Offers choice, we can see the relative importance of the different facets to the prediction of overall likelihood. The comparative importance of the category facet for overall prediction might, in part, be explained by the clear preference of younger customers for toys over other product types, as evident from the report on the predictiveness of customer age for offer category acceptance.

    Conclusion

    Oracle Real-Time Decisions' flexible decisioning framework allows for the construction of exceptionally elaborate prediction models that facilitate powerful targeting, but nonetheless provide insightful reporting. Although few customers will have a direct need for such a sophisticated solution architecture, it is encouraging to see that this lies within the realm of the possible with RTD, and with limited configuration and customization required. There are obviously numerous other ways in which the predictive and reporting capabilities of Oracle Real-Time Decisions can be expanded upon to tailor to individual customers' needs. We will not be able to elaborate on them all on this blog, and finding the right approach for any given problem is often more difficult than implementing the solution. Nevertheless, we hope that these last few posts have given you enough of an understanding of the power of the RTD framework and its models that you can take some of these ideas and improve upon your own strategy. As always, if you have any questions about the above—or any Oracle Real-Time Decisions design challenges you might face—please do not hesitate to contact us, via the comments below, social media or directly at Oracle. We are completely multi-channel and would be more than glad to help. :-)

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-20

    - by Bob Rhubart
    Attend OTN Architect Day – by Architects, for Architects – October 25
    You won't need 3D glasses to take in these live presentations (8 sessions, two tracks) on Cloud computing, SOA, and engineered systems. And the ticket price is: zero. Nothing. Absolutely free. Register now for Oracle Technology Network Architect Day in Los Angeles: Thursday, October 25, 2012, 8:00am – 5:00pm, Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048.

    Loving VirtualBox 4.2… | The ORACLE-BASE Blog
    Is it wrong for a man to love a technology? Oracle ACE Director Tim Hall has several very good reasons for his feelings…

    Running RichFaces on WebLogic 12c | Markus Eisele
    "With all the JMS magic and the different provider checks in the showcase this has become some kind of a challenge to simply build and deploy it," says Oracle ACE Director Markus Eisele. His detailed post will help you to meet that challenge.

    Oracle ADF Coverage at OOW | Frank Nimphius
    Frank Nimphius shares a comprehensive and well-organized list of Oracle ADF sessions and activities scheduled for Oracle OpenWorld in San Francisco.

    OIM 11g R2 Catalog Customization Example | Daniel Gralewski
    Oracle Fusion Middleware A-Team member Daniel Gralewski's post shows "how OIM catalog can be customized by using OIM UI capabilities such as managed beans and EL expressions. The post first describes the use case and the solution to address the use case; then it describes the solution details as well as provides links to the artifacts."

    New Book: Oracle BPM Suite 11g: Advanced BPMN Topics | Mark Nelson
    Redstack blogger Mark Nelson shares an overview of Oracle BPM Suite 11g: Advanced BPMN Topics, the new book he co-authored with Tanya Williams. Nelson describes it as "a concise presentation of both theory and practical examples of the areas of BPMN where we have encountered the most widespread confusion and misunderstanding."

    Thought for the Day
    "I strive for an architecture from which nothing can be taken away." — Helmut Jahn
    Source: Brainy Quote

    Read the article

  • Are you at Super Computing 10?

    - by Daniel Moth
    Like last year, I was going to attend SC this year, but other events are unfortunately keeping me here in Seattle next week. If you are going to be in New Orleans, have fun and be sure not to miss out on the following two opportunities.

    MPI Debugging UX Study
    Throughout the week, my team is conducting 90-minute studies on debugging MPI applications within Visual Studio. In exchange for your feedback (under NDA) you will receive a Microsoft gratuity (and the knowledge that you are impacting the development of Visual Studio). If you are interested, sign up at the Microsoft Information Desk in the Exhibitor Hall during exhibit hours. Outside of exhibit hours, send email to [email protected]. If you took part in the GPGPU study, this is very similar except it is for MPI.

    Microsoft High Performance Computing Summit
    On Monday the 15th, the annual Microsoft user group meeting takes place. Shuttle transportation and lunch are provided. For full details of this event and to register, please visit the official event page.

    Comments about this post welcome at the original blog.

    Read the article

  • Is there any evidence that drugs can actually help programmers produce "better" code? [closed]

    - by sytycs
    I just read this quote from Steve Jobs: "Doing LSD was one of the two or three most important things I have done in my life." Also a quote from that article: "He was hardly alone among computer scientists in his appreciation of hallucinogenics and their capacity to liberate human thought from the prison of the mind." Now I'm wondering if there's any evidence to support the theory that drugs can help make a "better" programmer. Has there ever been a study where programmers have been given drugs to see if they could produce "better" code? Is there a well-known programming concept or piece of code which originated from people who were on drugs? EDIT: I did a little more research, and it turns out Dennis R. Wier actually documented how he took LSD to wrap his head around a coding project: "At one point in the project I could not get an overall viewpoint for the operation of the entire system. It really was too much for my brain to keep all the subtle aspects and processing nuances clear so I could get a processing and design overview. After struggling with this problem for a few weeks, I decided to use a little acid to see if it would enable a breakthrough, because otherwise, I would not be able to complete the project and be certain of a consistent overall design"[1] There is also an interesting article on Wired about Kevin Herbert, who used LSD to solve tough technical problems, and chemist Kary Mullis even said "...that LSD had helped him develop the polymerase chain reaction that helps amplify specific DNA sequences." [2]

    Read the article

  • Should I go with OpenGL to see my future in Game Development industry? [closed]

    - by Priyank
    Possible Duplicate: Should I continue studying OpenGL or just switch to DirectX to give me a better chance of landing a job in the game industry? I tried Google but found quite old articles, so I am in search of an answer in the context of the year 2012. Hi all, I don't know if you will consider this question appropriate for this community, but I am constantly searching for a perfect answer. What I have seen is that most of the games released these days are DirectX 1x based; a few games like StarCraft or Diablo, which don't have high-end graphics, use OpenGL. So I have a few questions to ask. The platforms I would like to target are PC (Windows), Xbox 360 and PS3 (must). Should I go with learning OpenGL to see my future in the game development industry, or should I shift to DirectX? If I learn OpenGL first, will it be difficult to learn DirectX afterwards? Which API is most suitable for indie development? Which of the two APIs is better from a coder's (programmer's) point of view, e.g. OOP and style of coding? Should OpenGL being cross-platform be the only reason to choose it over DirectX, even when vendors are not providing stable enough drivers for it? Thanks in advance. I have read this post, but I still have a few questions: Should I continue studying OpenGL or just switch to DirectX to give me a better chance of landing a job in the game industry?

    Read the article

  • Finding terms surrounding a trending hashtag?

    - by aendrew
    I'm looking for a way to find "sub-trends", or words that are trending beneath a larger trend. For instance, say "#foo" is the hashtag for a conference. Searching for "#foo" only gives you a general overview of what people are talking about; if "#foo" moves too quickly, it becomes really difficult to track disparate conversations at #foo. If "#bar" and "#abc" are two different sessions at "#foo", one can find more specific information by searching for "#foo #bar" or "#foo #abc"; yet how would one find out about the existence of these surrounding hashtags, i.e., sub-trends? If you look at the screenshot for Peoplebrowsr, there's a panel that looks for "words surrounding [trend]", which seems to be exactly what I'm looking for. Is there a way to accomplish this more simply, i.e., without paying $149/mo. for Peoplebrowsr? Thanks! Update: Another service that can do this is Twazzup (click for example). The "Community" panel has some limited info on surrounding words; is there a tool that does this, but with more detail?
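
    Once you have the raw tweets for the main hashtag (e.g. via the search API), the counting itself is simple. A minimal sketch in Python; the sample tweets here are made up for illustration:

        from collections import Counter

        def sub_trends(tweets, main_tag="#foo", top_n=10):
            counts = Counter()
            for tweet in tweets:
                words = tweet.lower().split()
                if main_tag in words:
                    # count every other hashtag co-occurring with the main one
                    counts.update(w for w in words
                                  if w.startswith("#") and w != main_tag)
            return counts.most_common(top_n)

        tweets = [
            "#foo great keynote on search #bar",
            "#foo #bar speaker taking questions",
            "#foo heading to the #abc session",
        ]
        print(sub_trends(tweets))  # [('#bar', 2), ('#abc', 1)]

    Dropping the startswith("#") filter gives plain "words surrounding the trend" rather than just sub-hashtags.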

    Read the article

  • Enabling a user (created with the adduser command) for lightdm graphical login

    - by Basile Starynkevitch
    I just installed Ubuntu 12.04 AMD64 on a new (empty) hard disk (because the previous one crashed). Since I am quite familiar with Debian, I created two accounts with the adduser command. Since I also have an NFSv3 file system, I explicitly gave user ids when creating them (for simplicity, I keep the same user id on the home server, running Debian; the user names contain digits; I'm not using LDAP), e.g.:

        # grep bethy /etc/passwd
        bethy46:x:501:501:Bethy XXX,,,06123456:/home/bethy:/bin/bash
        # grep bethy /etc/group
        bethy64:x:501:
        # grep bethy /etc/shadow
        bethy46:$6$vQ-wmuchmorethings-2o/:15479:0:99999:7::

    Of course /home/bethy exists. (The actual user name is slightly different, and I am not showing the real entries, for obvious privacy reasons.) However, these users don't appear at the graphical login prompt (lightdm), even though they exist in the system: they have entries in /etc/passwd and /etc/shadow, and I (partly) restored their /home. I've got no specific user config under /etc/lightdm; the file /etc/lightdm/users.conf mentions:

        # NOTE: If you have AccountsService installed on your system, then LightDM
        # will use this instead and these settings will be ignored

    but I have no idea how to deal with AccountsService through the command line. As you probably guessed, I really dislike doing administrative tasks through a graphical interface; I much prefer the command line. What did I do wrong? How can a user entry not appear at the lightdm graphical login? (I need my wife's user entry to appear for graphical login.) I am not asking how to hide a user, but how to show it in the lightdm graphical prompt.

    Work-around: as I have been told in comments by Nirmik and by Enzotib, lightdm probably doesn't show any users with a uid less than 1024. So I changed all the uids to be more than 8200 (including on the Debian NFS server), and this made all the users visible at the graphical prompt. It is a pain that such a threshold is not really documented.
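
    For anyone hitting the same thing, here is a quick sketch (Python 3, standard library only) that lists which accounts fall below an assumed uid cutoff and would therefore be hidden from the greeter; the threshold is inferred from the observed behaviour, not a documented constant:

        import pwd

        THRESHOLD = 1024  # apparent cutoff from the workaround; not officially documented

        for entry in pwd.getpwall():
            if entry.pw_shell.endswith("sh"):  # rough filter for login-capable accounts
                state = "shown" if entry.pw_uid >= THRESHOLD else "hidden"
                print("%-12s uid=%-6d %s" % (entry.pw_name, entry.pw_uid, state))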

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-01

    - by Bob Rhubart
    Complexity of Social Computing - Is it a Consideration for EA's? | Pat Shepherd (blogs.oracle.com)
    Pat Shepherd asks, "Does Enterprise Architecture need to consider Social Computing in its scope?"

    Who should own the Enterprise Architecture? | Michael Glas (blogs.oracle.com)
    "Instead of looking at just who owns the architecture," suggests Michael Glas, "think about what the person/role/organization should do."

    The Application Architecture Domain | Michael Glas (blogs.oracle.com)
    Michael Glas asks—and answers: "As an Enterprise Architect, what do I need to consider when looking at/defining/designing the Application Architecture Domain?"

    CAP Twelve Years Later: How the "Rules" Have Changed | Eric Brewer (www.infoq.com)
    The CAP theorem asserts that any networked shared-data system can have only two of three desirable properties. However, by explicitly handling partitions, designers can optimize consistency and availability, thereby achieving some trade-off of all three.

    Oracle DB with OEM in Amazon Cloud | Dr. Frank Munz (www.munzandmore.com)
    Dr. Frank Munz shares a screencast that explains "how to create an Oracle DB instance in AWS, how to enable OEM...and how to connect to your cloud instance with a local installation of NetBeans."

    Sample External Login.jsp page for Oracle Access Manager 11g | Brian Eidelman (fusionsecurity.blogspot.com)
    A-Team blogger Brian Eidelman expands on a previous post dealing with configuring OAM 11g to use an externally hosted custom login page.

    Bay Area Coherence Special Interest Group (BACSIG) Meeting June 7 (coherence.oracle.com)
    Date: Thursday, June 7, 2012. Time: 5:30pm – 9:00pm PT. Where: Oracle Conference Center, Room 103, 350 Oracle Parkway, Redwood Shores, CA. Presentations:
    6:00 p.m. - Coherence 101, The Evolution of Distributed Caching - Noah Arliss (Oracle)
    7:00 p.m. - Optimizing Performance for Oracle Coherence and TopLink Grid at OOCL - Matt Rosen, Leo Limqueco (OOCL)
    8:00 p.m. - Oracle Coherence Message Bus - Extreme Performance on Oracle Exalogic - Ballav Bihani (Oracle)

    Thought for the Day
    "I can't be left unsupervised." — Ron Wood (born 06/01/1947)
    Source: Brainy Quote

    Read the article

  • Four Proven Advantages of Online Learning | Outside Cost, Accessibility or Flexibility

    - by Mohit Phogat
    Coursera believes that online courses complement and supplement traditional education (versus the common misconception that online learning will "replace" traditional education). Our research shows that Coursera's platform, when used concurrently with a traditional classroom setup, is ideal for "blended learning" (i.e., students watch lectures pre-class, then class time focuses on interactive work and discussion). Additionally, we agree with Brad Zomick of SkilledUp—an online learning aggregator—who acknowledges an online course "isn't an alternative at all but rather a different path with its own rewards." The advantages of Coursera and our apps for mobile were straightforward and conspicuous from the start: we're free, open, and flexible to learners' unique needs and styles. Over the past two years, however, the evidence has proven there are many more tangible benefits to open, online learning. In SkilledUp's "The Advantages of Online Courses [Infographic]"–crafted from the findings of leading educational research–four observations stand out beyond the overt characteristics:

    - Speedier Learning: "Research shows that online students achieve same or better learning results in about half the time as those in traditional courses"
    - More Active, Engaged & Motivated: Learners thrive "when working with coursework that is challenging but within their capacity to master."
    - Tangible Skill Building: with an "improved attitude toward learning"
    - Better Teaching Quality: Courses are taught by experts, with various multimedia and cutting-edge technology, and "are usually better organized than traditional courses"

    This is only the beginning, Courserians! Every day we hear your incredible stories about how open online courses enrich your lives and enhance your careers, while we study the steady stream of scientific, big-data research proving their worth on a large scale (such as UPenn's latest research on the welcomed diversity in Coursera-hosted Wharton MBA courses). Our motto, "Learning without Limits," reminds us that open, online courses give tremendous opportunity to those who might not otherwise have access (or time, or money) to study at a high-caliber institution. Source: Coursera

    Read the article

  • How can I re-enable the typing break in 11.10?

    - by Hamish Downer
    I've just upgraded to beta 2 of Oneiric/11.10 and the typing break has gone. I've gone into the system settings and looked in "Keyboard Layout" and "Keyboard" and can't find anything. Has it just been dropped? Is there some hidden way to re-enable it? Update: Thought I'd write an update based on some stuff that has happened since this question (and the two answers) were written. Workrave has now been re-instated in oneiric-backports and for 12.04 (how to enable backports). It works fine, though if you want to put it in your systray then you need to allow it in there. The easy/lazy command line way to allow workrave into the notification area is to do something like: gsettings set com.canonical.Unity.Panel systray-whitelist "['all']" But read this question if you want a more detailed explanation about what you're doing here. Meanwhile the Gnome typing break has been split out into an app called DrWright, however it has not (at time of writing) been packaged for 11.10 (or later). And as mentioned in the other answer, another option is RSIBreak. It is a KDE app but works fine in Unity.

    Read the article

  • Changes to the LINQ-to-StreamInsight Dialect

    - by Roman Schindlauer
    In previous versions of StreamInsight (1.0 through 2.0), CepStream<> represents temporal streams of many varieties:

    - Streams with ‘open’ inputs (e.g., those defined and composed over CepStream<T>.Create(string streamName))
    - Streams with ‘partially bound’ inputs (e.g., those defined and composed over CepStream<T>.Create(Type adapterFactory, …))
    - Streams with fully bound inputs (e.g., those defined and composed over To*Stream – sequences or DQC)

    The stream may be embedded (where Server.Create is used) or remote (where Server.Connect is used). When adding support for new programming primitives in StreamInsight 2.1, we faced a choice: add a fourth variety (use CepStream<> to represent streams that are bound to the new programming model constructs), or introduce a separate type that represents temporal streams in the new user model. We opted for the latter. Introducing a new type has the effect of reducing the number of (confusing) runtime failures due to inappropriate uses of CepStream<> instances in the incorrect context. The new types are:

    - IStreamable<>, which logically represents a temporal stream.
    - IQStreamable<> : IStreamable<>, which represents a queryable temporal stream. Its relationship to IStreamable<> is analogous to the relationship of IQueryable<> to IEnumerable<>. The developer can compose temporal queries over remote stream sources using this type.

    The syntax of temporal queries composed over IQStreamable<> is mostly consistent with the syntax of our existing CepStream<>-based LINQ provider. However, we have taken the opportunity to refine certain aspects of the language surface. Differences are outlined below. Because 2.1 introduces new types to represent temporal queries, the changes outlined in this post do not impact existing StreamInsight applications using the existing types!

    SelectMany

    StreamInsight does not support the SelectMany operator in its usual form (which is analogous to SQL’s “CROSS APPLY” operator):

        static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> collectionSelector)

    It instead uses SelectMany as a convenient syntactic representation of an inner join. The parameter to the selector function is thus unavailable. Because the parameter isn’t supported, its type in StreamInsight 1.0 – 2.0 wasn’t carefully scrutinized. Unfortunately, the type chosen for the parameter is nonsensical to LINQ programmers:

        static CepStream<R> SelectMany<T, R>(this CepStream<T> source, Expression<Func<CepStream<T>, CepStream<R>>> streamSelector)

    Using Unit as the type for the parameter accurately reflects StreamInsight’s capabilities:

        static IQStreamable<R> SelectMany<T, R>(this IQStreamable<T> source, Expression<Func<Unit, IQStreamable<R>>> streamSelector)

    For queries that succeed – that is, queries that do not reference the stream selector parameter – there is no difference between the code written for the two overloads:

        from x in xs
        from y in ys
        select f(x, y)

    Top-K

    The Take operator used in StreamInsight causes confusion for LINQ programmers because it is applied to the (unbounded) stream rather than the (bounded) window, suggesting that the query as a whole will return k rows:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select x.B).Take(k)

    The use of SelectMany is also unfortunate in this context because it implies the availability of the window parameter within the remainder of the comprehension.
    The following compiles but fails at runtime:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select win).Take(k)

    The Take operator in 2.1 is applied to the window rather than the stream:

    Before:

        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select x.B).Take(k)

    After:

        from win in xs.SnapshotWindow()
        from b in
            (from x in win
             orderby x.A
             select x.B).Take(k)
        select b

    Multicast

    We are introducing an explicit multicast operator in order to preserve expression identity, which is important given the semantics about moving code to and from StreamInsight. This also better matches existing LINQ dialects, such as Reactive. This pattern enables expressing multicasting in two ways:

    Implicit:

        var ys = from x in xs
                 where x.A > 1
                 select x;
        var zs = from y1 in ys
                 from y2 in ys.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
                 select y1 + y2;

    Explicit:

        var ys = from x in xs
                 where x.A > 1
                 select x;
        var zs = ys.Multicast(ys1 =>
            from y1 in ys1
            from y2 in ys1.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
            select y1 + y2);

    Notice the product translates an expression using implicit multicast into an expression using the explicit multicast operator. The user does not see this translation.

    Default window policies

    Only default window policies are supported in the new surface. Other policies can be simulated by using AlterEventLifetime.

    Before: xs.SnapshotWindow(WindowInputPolicy.ClipToWindow, SnapshotWindowInputPolicy.Clip)
    After: xs.SnapshotWindow()

    Before: xs.TumblingWindow(TimeSpan.FromSeconds(1), HoppingWindowOutputPolicy.PointAlignToWindowEnd)
    After: xs.TumblingWindow(TimeSpan.FromSeconds(1))

    Before: xs.TumblingWindow(TimeSpan.FromSeconds(1), HoppingWindowOutputPolicy.ClipToWindowEnd)
    After: Not supported

    LeftAntiJoin

    Representation of LASJ as a correlated sub-query in the LINQ surface is problematic as the StreamInsight engine does not support correlated sub-queries (see discussion of SelectMany). The current syntax requires the introduction of an otherwise unsupported ‘IsEmpty()’ operator. As a result, the pattern is not discoverable and implies capabilities not present in the server. The direct representation of LASJ is used instead:

    Before:

        from x in xs
        where
            (from y in ys
             where x.A > y.B
             select y).IsEmpty()
        select x

    After:

        xs.LeftAntiJoin(ys, (x, y) => x.A > y.B)

    Before:

        from x in xs
        where
            (from y in ys
             where x.A == y.B
             select y).IsEmpty()
        select x

    After:

        xs.LeftAntiJoin(ys, x => x.A, y => y.B)

    ApplyWithUnion

    The ApplyWithUnion methods have been deprecated since their signatures are redundant given the standard SelectMany overloads:

    Before:

        xs.GroupBy(x => x.A).ApplyWithUnion(gs =>
            from win in gs.SnapshotWindow()
            select win.Count())

    After:

        xs.GroupBy(x => x.A).SelectMany(gs =>
            from win in gs.SnapshotWindow()
            select win.Count())

    Before:

        xs.GroupBy(x => x.A).ApplyWithUnion(gs =>
            from win in gs.SnapshotWindow()
            select win.Count(),
            r => new { r.Key, Count = r.Payload })

    After:

        from x in xs
        group x by x.A into gs
        from win in gs.SnapshotWindow()
        select new { gs.Key, Count = win.Count() }

    Alternate UDO syntax

    The representation of UDOs in the StreamInsight LINQ dialect confuses cardinalities.
    Based on the semantics of user-defined operators in StreamInsight, one would expect to construct queries in the following form:

        from win in xs.SnapshotWindow()
        from y in MyUdo(win)
        select y

    Instead, the UDO proxy method is referenced within a projection, and the (many) results returned by the user code are automatically flattened into a stream:

        from win in xs.SnapshotWindow()
        select MyUdo(win)

    The “many-or-one” confusion is exemplified by the following example, which compiles but fails at runtime:

        from win in xs.SnapshotWindow()
        select MyUdo(win) + win.Count()

    The above query must fail because the UDO is in fact returning many values per window while the count aggregate is returning one.

    Original syntax:

        from win in xs.SnapshotWindow()
        select win.UdoProxy(1)

    New alternate syntax:

        from win in xs.SnapshotWindow()
        from y in win.UserDefinedOperator(() => new Udo(1))
        select y

    -or-

        from win in xs.SnapshotWindow()
        from y in win.UdoMacro(1)
        select y

    Notice that this formulation also sidesteps the dynamic type pitfalls of the existing “proxy method” approach to UDOs, in which the type of the UDO implementation (TInput, TOutput) and the type of its constructor arguments (TConfig) need to align in a precise and non-obvious way with the argument and return types for the corresponding proxy method.

    UDSO syntax

    UDSO currently leverages the DataContractSerializer to clone initial state for logical instances of the user operator. Initial state will instead be described by an expression in the new LINQ surface.

    Before: xs.Scan(new Udso())
    After: xs.Scan(() => new Udso())

    Name changes

    - ShiftEventTime => AlterEventStartTime: the alter event lifetime overload taking a new start time value has been renamed.
    - CountByStartTimeWindow => CountWindow

    Read the article

  • Do you tend to write your own name or your company name in your code?

    - by Connell Watkins
    I've been working on various projects at home and at work, and over the years I've developed two main APIs that I use in almost all AJAX-based websites. I've compiled both of these into DLLs and called the namespaces Connell.Database and Connell.Json. My boss recently saw these namespaces in the software documentation for a company project and said I shouldn't be using my own name in the code. (But it's my code!) One thing to bear in mind is that we're not a software company; we're an IT support company, and I'm the only full-time software developer here, so there aren't really any procedures on how we should write software in the company. Another thing to bear in mind is that I do intend to one day release these DLLs as open-source projects. How do other developers group their namespaces within their company? Does anyone use the same class libraries in personal and in work projects? And does this work the other way round? If I write a class library entirely at work, and I've seen the library through from start to finish, designed it and programmed it, who owns that code? Can I use it for another project at home? Thanks.

    Update: I've spoken to my boss about this issue and he agrees that they're my objects and he's fine for me to open-source them. Before this conversation I started changing the objects anyway, which was actually quite productive, and the code now suits this specific project more than it did previously. Thank you to everyone involved for a very interesting debate. I hope all this text isn't wasted and someone learns from it. I certainly did. Cheers.

    Read the article

  • Where to put code documentation?

    - by Patrick
    I am currently using two systems to write code documentation (I am using C++):

    - Documentation about methods and class members is added next to the code, using the Doxygen format. Doxygen is run on the sources on a server, so the output can be seen in a web browser.
    - Overview pages (describing a set of classes, the structure of the application, example code, ...) are added to a Wiki.

    I personally think this approach is easy, because the documentation about members and classes is really close to the code, while the overview pages are really easy to edit in the Wiki (and it's also easy to add images, tables, ...). A web browser lets you see both sets of documentation. My co-worker now suggests putting everything in Doxygen, because we can then create one big help file with everything in it (using either Microsoft's HTML Help Workshop or Qt Assistant). My concern is that editing Doxygen-style documentation is much harder than editing the Wiki, especially when you want to add tables, images, ... (or is there a 'preview' tool for Doxygen that doesn't require you to regenerate the documentation before you can see the result?). What do big open-source (or closed-source) projects use to write their code documentation? Do they also split it between Doxygen-style and a Wiki, or do they use another system? What is the most appropriate way to expose the documentation: via a web server/browser, or via a big (several hundred MB) help file? Which approach do you take when writing code documentation?
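
    For illustration, a minimal Doxygen-style comment block for the first system; it is shown on a Python function only for brevity (using Doxygen's ## comment syntax for Python sources), and the function itself is invented:

        ## @brief Return the weighted mean of a list of samples.
        #  @param samples  a list of (value, weight) pairs
        #  @return the weighted mean as a float
        def weighted_mean(samples):
            total_weight = sum(w for _, w in samples)
            return sum(v * w for v, w in samples) / total_weight

        print(weighted_mean([(1.0, 2), (3.0, 1)]))  # 1.666...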

    Read the article

  • New Content: Customer Engagement & Oracle OpenWorld Preview

    - by user462779
    Two new bits of content are available on Profit Online. In "A Cross-Channel Approach to Consumer Engagement," Cassandra Moren, senior director of consumer goods industry marketing at Oracle, shares her thoughts on how consumer goods manufacturers are reaping benefits from developing a direct relationship with customers: "Consumer goods manufacturers are starting to adapt in ways that mirror retailers. They are making investments in innovative technologies and processes to build the infrastructure to support the market demand. With advances in aspects like social networking, digital marketing and mobility fundamentally changing the way consumers behave, the door has opened to building a more direct relationship with their customers." We've also published a Special Report on Oracle OpenWorld that gives a great overview of recommendations for must-see sessions and insider advice from experienced attendees. For example, this tip from John Matelski, newly elected president of the Independent Oracle Users Group: "Based on developments of the last 12 months, I think big data is definitely going to be hot. The challenges and opportunities of data governance will be another biggie. And there will obviously be a big emphasis on Oracle Exadata and the other Oracle Engineered Systems, with more than 100 sessions." More updates to come as we continue to add content to Profit Online on a regular basis. Thanks for reading!

    Read the article

  • Automated texture mapping

    - by brandon
    I have a set of seamless tiling textures. I want to be able to take an arbitrary model and create a UV map with these properties:

    - No stretching (all textures tile appropriately, so there is no stretching or shearing of the texture)
    - The textures display on the correct axis relative to the model they map to (if you look at the example, you can see some of the letters on the front are tilted; the y axis of the texture should match up with the y axis of the object. Some other faces have upside-down letters too)
    - The texture is as continuous as possible on the surface of the model (if two faces are adjacent, the texture continues on the adjacent face where it left off)
    - The model is closed (all faces are completely enclosed by other faces)

    A few notes: this mapping will occur before triangulation. I realize there are ways to do this by hand, and that automatically mapping textures is probably a hard problem in general, but since these textures are seamless and I just need uniform coverage it seems like an easier problem. I'm looking for an algorithmic approach to this that I can apply in general, not a tool that does it. What approach would work for this? Is there an existing one? (I assume so.)
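
    For what it's worth, here is a hedged sketch of one standard candidate: box (tri-planar) projection, where each face is textured by projecting it along the dominant axis of its normal. World units map directly to UV space, so seamless tiles never stretch, and adjacent faces sharing a dominant axis continue the texture. All names are illustrative (Python):

        # box-projection UVs for one face, assuming seamless, uniformly scaled textures
        def face_uvs(vertices, normal, scale=1.0):
            # pick the axis the face most directly faces; project onto the other two
            dominant = max(range(3), key=lambda i: abs(normal[i]))
            u_axis, v_axis = [i for i in range(3) if i != dominant]
            # world coordinates become UVs, so the texture's v axis follows the
            # model's vertical axis and tiles continuously across coplanar faces
            return [(v[u_axis] * scale, v[v_axis] * scale) for v in vertices]

        # a unit quad facing +z keeps its x/y coordinates as UVs
        print(face_uvs([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)], normal=(0, 0, 1)))

    Faces whose dominant axes differ still get visible seams at their shared edge; minimizing those seams (the full version of the continuity property) is where the problem gets genuinely hard.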

    Read the article

  • Are there any "best practices" on cross-device development?

    - by vstrien
    Developing for smartphones in the way the industry is currently doing it is relatively new. Of course, there has been enterprise-level mobile development for several decades, but the platforms have changed. Think of the move from stylus input to touch input (different screen resolutions, different control layouts, etc.), and of new ways of handling multi-tasking on mobile platforms (e.g. WP7's "tombstoning"). The way these platforms work isn't totally new (the iPhone has been around for quite a while now, for example), but at the moment, developing a functionally equal application for both desktop and smartphone comes down to developing two applications from the ground up. Especially with the birth of Windows Phone, with the .NET platform on board and Silverlight as the UI language, it's becoming appealing to promote the re-use of (parts of) the UI. Still, it's fairly obvious that the needs of an application on a smartphone (or tablet) are very different from the needs of a desktop application. An (almost) one-to-one conversion will therefore be impossible. My question: are there "best practices", pitfalls, etc. documented about developing "cross-device" applications (for example, developing an app for both the desktop and the smartphone/tablet)? I've been looking at weblogs, scientific papers and more for a week or so, but what I've found so far is only about "migratory interfaces".

    Read the article

  • Meaning of offset in pygame Mask.overlap methods

    - by Alan
    I have a situation in which two rectangles collide, and I have to detect how much they overlap, so I can redraw the objects in a way that they are only touching each other's edges. It's a situation in which a moving ball should hit a completely immovable wall and instantly stop moving. Since the ball sometimes moves multiple pixels per screen refresh, it is possible that it has entered the wall by more than half its surface when the collision is detected, in which case I want to shift its position back to the point where it only touches the edges of the wall. Here is the conceptual image of it: I decided to implement this with masks, and thought that I could supply the masks of both objects (wall and ball) and get the surface (as a square) of their intersection. However, there is also the offset parameter, which I don't understand. Here are the docs for the method:

    Mask.overlap: Returns the point of intersection if the masks overlap with the given offset - or None if it does not overlap.

        Mask.overlap(othermask, offset) -> x,y

    The overlap tests use the following offsets (which may be negative):

        +----+----------..
        |A   | yoffset
        |  +-+----------..
        +--|B
        |xoffset
        |  |
        :  :
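
    In practice, the offset is just the second mask's top-left corner relative to the mask you call overlap on. A minimal sketch, assuming ball and wall each carry a rect and a mask built with pygame.mask.from_surface (the names are illustrative, not from the docs):

        import pygame

        def collision_point(wall, ball):
            # offset of the ball mask relative to the wall mask's top-left corner
            offset = (ball.rect.x - wall.rect.x, ball.rect.y - wall.rect.y)
            # first overlapping point in the wall's coordinate space, or None
            return wall.mask.overlap(ball.mask, offset)

    From that point (or from Mask.overlap_area sampled at shifted offsets) you can work out how far to push the ball back along its movement direction so the two shapes merely touch.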

    Read the article

  • The current state of a MERGE Destination for SSIS

    - by jamiet
    Hugo Tap asked me on Twitter earlier today whether or not there exists an SSIS dataflow destination component that enables one to MERGE data into a table rather than INSERT it. It's a common request, so I thought it might be useful to summarise the current state of play as regards a MERGE destination for SSIS. Firstly, there is no MERGE destination component in the box; that is, when you install SSIS no MERGE destination will be available. That being said, the SSIS team have made available a MERGE destination component via Codeplex, which you can get from http://sqlsrvintegrationsrv.codeplex.com/releases/view/19048. I have never used it so cannot vouch for its usefulness, although judging by some of the reviews you might not want to set your expectations too high. Your mileage may vary. In the past it has occurred to me that a built-in way to provide MERGE from the SSIS pipeline would be highly valuable. I assume that this would have to be provided by the database into which you were merging, hence in March 2010 I submitted the following two requests to Connect: BULK MERGE (111 votes at the time of writing) and [SSIS] BULK MERGE Destination (15 votes). If you think these would be useful, feel free to vote them up and add a comment. Lastly, this one is nothing to do with SSIS, but if you want to perform a minimally logged MERGE using T-SQL, Sunil Agarwal has explained how at Minimal logging and MERGE statement. @Jamiet

    Read the article
