Search Results

Search found 12846 results on 514 pages for 'ghost answer'.


  • How Do Nautilus Album Art Thumbnails work?

    - by Amir Adar
    There's something I've been searching for an answer to for a while now, to no avail, which is strange, as it seems like the kind of thing people would talk about: one of those nice little touches that enhance the computing experience a bit. Anyway. I have a fair music collection, and I save all the songs as ogg files. All is fine, and I can listen to the files, but there's something weird about them in Nautilus: some have icons displaying their album art, while others don't, and I just can't understand WHY. I read on this site today that it's a matter of embedding the album art in the file, but that doesn't seem to be it, as I embedded the album art in the files I wanted several times, to no avail. Furthermore, removing the embedded album art from a file had no effect on those that ARE displaying the icons. So my question is: How does it work? Where does Nautilus (or Ubuntu, I don't know) get the picture from? How do I edit it? Thanks in advance! -Amir
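    One thing worth ruling out is a stale thumbnail cache: Nautilus caches generated thumbnails per user, so an icon can lag behind changes made to the file itself. A minimal sketch for forcing regeneration, assuming a GNOME-era cache location (the exact path varies by version):

        # Clear the per-user thumbnail cache (older GNOME uses ~/.thumbnails,
        # newer uses ~/.cache/thumbnails), then restart Nautilus so it
        # regenerates the icons on the next browse.
        rm -rf ~/.thumbnails ~/.cache/thumbnails
        nautilus -q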

    Read the article

  • A Technical Perspective On Rapid Planning

    - by Robert Story
    Upcoming Webcast. Title: A Technical Perspective On Rapid Planning. Date: April 14, 2010. Time: 11:00 am EDT, 9:00 am MDT, 8:00 am PDT, 16:00 GMT. Product Family: Value Chain Planning. Summary: Oracle's Strategic Network Optimization (SNO) product is a powerful supply chain design and tactical planning tool. This one-hour session is recommended for functional users who want to gain a better understanding of how Oracle's SNO solution can help you solve complex supply chain issues, including supply chain design, risk management, logistics planning, sustainability planning, and a whole lot in between! Find out how SNO can be used to solve many different types of real-world business issues. Topics will include: Risk/Disaster Management; Carbon Emissions Management; Global Sourcing; Labor/Workforce Planning; Product Mix Optimization. A short, live demonstration (only if applicable) and question and answer period will be included. Click here to register for this session. The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • How to force browsers to always reload xslt files?

    - by bitmask
    Related: Apache: How can I force the browser to reload CSS files? I'm building an XML page (on an Apache 2 server) that is supposed to be translated to XHTML by the browser, so my server also serves a main.xslt that the XML file uses as its stylesheet, similar to the scenario with the CSS files in the linked question. However, none of the tricks provided in that answer, or in related questions on SO, solves the issue for Opera. While Firefox responds to F5 by fetching not only the XML file but also the XSLT file, Opera reloads only the XML file. I tried both setting the Last-Modified HTTP header via an .htaccess file and using the expires module of apache2. This is what my .htaccess looks like right now:

        AddType text/xsl;charset=utf-8 .xslt
        ExpiresByType text/xsl "modification plus 1 second"
        Header set Last-Modified "Wed, 08 Jan 2000 23:11:55 GMT"
        #Header set Last-Modified "Wed, 08 Jan 2020 23:11:55 GMT"

    If I open the XSLT myself and manually reload it, the XML presentation is updated as well, but this is tedious for development. Note: There is no PHP or any kind of scripting involved. Everything is static.
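    Two hedged things worth checking here: mod_expires only emits headers when ExpiresActive On is set, which the snippet above doesn't include, and it is worth verifying what the server actually sends before blaming the browser. A quick diagnostic sketch (the URL is an assumption for illustration):

        # Inspect the caching headers actually served for the stylesheet.
        curl -sI http://localhost/main.xslt | grep -iE 'cache-control|expires|last-modified'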

    Read the article

  • SQL SERVER – Relationship with Parallelism with Locks and Query Wait – Question for You

    - by Pinal Dave
    Today I have one very simple question based on the following image. Full disclaimer: I have no idea why it is like this. I tried reaching out to a few of my friends who know a lot about SQL Server, but no one had an answer. Here is the question: if you go to Server Properties and click on Advanced, you will see the following screen. Under the Parallelism section you will notice there are four options: Cost Threshold for Parallelism, Locks, Max Degree of Parallelism, and Query Wait. I can clearly understand why Cost Threshold for Parallelism and Max Degree of Parallelism belong to Parallelism, but I am not sure why the two other options, Locks and Query Wait, belong to the Parallelism section. I can see that the options are ordered alphabetically, but I do not understand the reason for Locks and Query Wait being listed under Parallelism. Here is the question for you: why are the Locks and Query Wait options listed under the Parallelism section in SQL Server Advanced Properties? Please leave a comment with your explanation. I will publish valid answers on this blog with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
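    For readers who want to look at the same four settings without the GUI, here is a sketch using sp_configure through sqlcmd (the server name and Windows-auth flag are assumptions for illustration):

        # Show advanced options so all four settings become visible,
        # then query each one by its sp_configure name.
        sqlcmd -S localhost -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
        sqlcmd -S localhost -E -Q "EXEC sp_configure 'cost threshold for parallelism'; EXEC sp_configure 'locks'; EXEC sp_configure 'max degree of parallelism'; EXEC sp_configure 'query wait (s)';"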

    Read the article

  • Mount Ubuntu shares remotely with Mac and Windows

    - by Donald
    First time on here, so please be gentle! I have set up a small school network with an Ubuntu 12.04 server, mainly for use as a file server. I have managed to set the server up (all command-line based, no GUI) and configure the Samba shares, which work really well internally. The school has a combination of Macs and Windows machines internally, and they can all access the shares happily. The school has a fixed-IP ADSL connection, and I have added a route in the router to allow me remote access to the server using SSH (port 22). That also works well. All good so far! What I now want to do is allow remote access to the shares. I have done a bit of reading and thought I had found the answer with SSHFS, but I am still none the wiser. So, my basic questions are: In Windows, how can I map to the Ubuntu shares across the internet through the router? In Mac OS, how can I add the remote share across the internet? The school used to have a Windows server, and the users were used to creating a VPN and then pulling up the shared folders, but I'm unsure how to do this with the Ubuntu server. I assume I need to add another route through the router to allow for SSHFS or something similar? Thanks in advance... Donald
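    Since SSH already works through the router, no extra route should be needed for SSHFS; it runs over the same port 22. A minimal sketch from a Linux or Mac client (the hostname, user, and share path are assumptions; macOS additionally needs a FUSE implementation plus sshfs installed, and Windows would need a third-party SSHFS client or the VPN route instead):

        # Mount the remote share over the existing SSH connection ...
        sshfs donald@school.example.com:/srv/samba/share ~/school-share
        # ... work on the files as if they were local, then unmount:
        fusermount -u ~/school-share   # Linux; on macOS use: umount ~/school-share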

    Read the article

  • "Integratable" but not "integrated" GPL

    - by mgibsonbr
    There has been much debate over whether or not merely linking to a piece of code makes it a derivative work. I know the FSF says "yes", so according to them I can't dynamically link a non-GPL-compatible program to a GPL library and distribute the whole. But I could do that for private use, as long as no code is released to the public. That made me wonder: what if I don't redistribute the GPL code at all? If my program can work alone (reinforcing my claim that it's not a derivative work), but can do more if the GPL library is also installed on the system, couldn't I just release my application under my own licensing terms - without including any GPL code - and post instructions for anyone interested to separately download the GPL code and do the integration "for their private use"? I know it's against the "spirit" of the GPL, so I'm not suggesting it's a good idea to do that. However, this question has been bugging me for some time, especially because of the implications of each answer: If I can NOT do that: can I write another library with a similar API? (Before answering "of course you can", remember that having the same API would allow both libraries to be swapped at will by my customers - so I wouldn't need to work too hard on my library or even make it actually work. How do you determine whether a similar program is just similar or is a circumvention attempt?) If I CAN do that: can I also be paid to perform the service of installing the GPL library for a customer? (I sell them my program, install it on their machines, and download and install the GPL library too.) Can I put the two programs on the same website? On two different CDs? (I know I said the idea was not to redistribute the GPL code; I'm just thinking of excuses people could use to claim they're not redistributing even though they are.)

    Read the article

  • Performance Enhancement in Full-Text Search Query

    - by Calvin Sun
    Ever since its first release, we have continued consolidating and developing the InnoDB Full-Text Search feature, and there is one recent improvement worth blogging about. It is a joint effort with the MySQL Optimizer team that simplifies the query plans of some common queries and dramatically shortens query times. I will describe the issue, our solution, and the end result, with some performance numbers to demonstrate our continuing enhancement of the Full-Text Search capability.

    The Issue: As discussed in previous blogs, InnoDB implements its Full-Text index as inverted auxiliary tables. A query, once parsed, is reinterpreted into several queries against the related auxiliary tables, and those results are then merged and consolidated into the final result. So at the end of the query we have all matching records on hand, sorted by their ranking or by their Doc IDs. Unfortunately, MySQL's optimizer and query processing were initially designed for the MyISAM Full-Text index, and sometimes did not fully utilize the complete result package from InnoDB. Here are a couple of examples:

    Case 1: Query result ordered by rank, with only the top N results:

        mysql> SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS SCORE
               FROM articles ORDER BY score DESC LIMIT 1;

    In this query the user retrieves the single record with the highest ranking. The answer should come quickly once all the matching documents are on hand, especially since they are already ranked. However, before this change, MySQL would retrieve a ranking for almost every row in the table, sort them all, and only then return the top result. This whole retrieve-and-sort is unnecessary, given that InnoDB already has the answer. In a real-life case the table could have millions of rows, so the old scheme would retrieve and sort millions of rankings even if the full-text search found only 3 matching rows. The million-ranking retrieval is done in vain: the query only needs the rankings of the 3 matching rows (every other row's ranking is 0), and the top-ranked result is simply the first record in InnoDB's already-sorted result.

    Case 2: SELECT COUNT(*) on matching records:

        mysql> SELECT COUNT(*) FROM articles
               WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE);

    In this case the InnoDB search finds the matching rows quickly and has all of them on hand. However, before our change, MySQL requested every row in the table one by one, just to check whether its ranking was larger than 0, and then tallied up the count. In fact, there is no need for MySQL to fetch any rows: InnoDB already has all the matching records, and a single InnoDB API call retrieves the count. The difference can be huge, as the following query output shows:

        mysql> select count(*) from searchindex_inno where match(si_title, si_text) against ('people');
        +----------+
        | count(*) |
        +----------+
        |   666877 |
        +----------+
        1 row in set (16 min 17.37 sec)

    So the query took almost 16 minutes. Let's see how long InnoDB itself took to come up with the result. In InnoDB, you can obtain extra diagnostic printout by turning on innodb_ft_enable_diag_print, which prints extra query info to the error log:

        keynr=2, 'people' NL search Total docs: 10954826 Total words: 0
        UNION: Searching: 'people'
        Processing time: 2 secs: row(s) 666877: error: 10
        ft_init() ft_init_ext()
        keynr=2, 'people' NL search Total docs: 10954826 Total words: 0
        UNION: Searching: 'people'
        Processing time: 3 secs: row(s) 666877: error: 10

    The output shows it took InnoDB only 3 seconds to get the result, while the whole query took 16 minutes to finish. A large amount of time was wasted on the unneeded row fetching.

    The Solution: The solution is obvious: MySQL can skip some of its steps, optimize its plan, and obtain useful information directly from InnoDB. The savings from doing this include:

    1) Avoid redundant sorting. Since InnoDB has already sorted the result according to ranking, the MySQL query processing layer does not need to sort again to get the top matching results.
    2) Avoid row-by-row fetching to get the matching count. InnoDB provides all the matching records; every row not in the result list has a ranking of 0 and need not be retrieved, and InnoDB already has a count of the total matching records on hand. No need to recount.
    3) Covered index scan. InnoDB's result always contains the matching records' Document IDs and rankings, so if only the Document ID and ranking are needed, there is no need to go to the user table to fetch the records themselves.
    4) Narrow the search result early, reducing user-table access. If the user wants the top N matching records, we do not need to fetch all matching records from the user table; we can first select the top N matching Doc IDs and then fetch only the corresponding records.

    Performance Results and Comparison with MyISAM: The effect of this change is very visible. I include six test results, performed by Alexander Rubin, to demonstrate how fast InnoDB queries now are compared with MyISAM Full-Text Search. The tests are based on English Wikipedia data of 5.4 million rows and an approximately 16 GB table, performed on a machine with one dual-core CPU, an SSD drive, 8 GB of RAM, and the InnoDB buffer pool set to 8 GB.

    Table 1: SELECT with LIMIT clause

        mysql> SELECT si_title, match(si_title, si_text) against('family') as rel
               FROM si WHERE match(si_title, si_text) against('family')
               ORDER BY rel desc LIMIT 10;

                             InnoDB      MyISAM            Times Faster
        Time for the query   1.63 sec    3 min 26.31 sec   127

    You can see that for this particular query (retrieve the top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM.

    Table 2: SELECT COUNT query

        mysql> select count(*) from si where match(si_title, si_text) against('family');
        +----------+
        | count(*) |
        +----------+
        |   293955 |
        +----------+

                             InnoDB      MyISAM             Times Faster
        Time for the query   1.35 sec    28 min 59.59 sec   1289

    In this particular case, with 293k matching results, InnoDB took only 1.35 seconds to get all of them, while it took MyISAM almost half an hour - about 1289 times faster!

    Table 3: SELECT ID with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel
               FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>)
               ORDER BY rel desc LIMIT 10;

        Term                                        InnoDB     MyISAM      Times Faster
        family                                      0.5 sec    5.05 sec    10.1
        family film                                 0.95 sec   25.39 sec   26.7
        Pizza restaurant orange county California   0.93 sec   32.03 sec   34.4
        President united states of America          2.5 sec    36.98 sec   14.8

    Table 4: SELECT title and text with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, si_title, si_text, ... as rel
               FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>)
               ORDER BY rel desc LIMIT 10;

        Term                                        InnoDB     MyISAM      Times Faster
        family                                      0.61 sec   41.65 sec   68.3
        family film                                 1.15 sec   47.17 sec   41.0
        Pizza restaurant orange county California   1.03 sec   48.2 sec    46.8
        President united states of America          2.49 sec   44.61 sec   17.9

    Table 5: SELECT ID with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel
               FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>)
               ORDER BY rel desc LIMIT 10;

        Term                                        InnoDB     MyISAM      Times Faster
        family                                      0.5 sec    5.05 sec    10.1
        family film                                 0.95 sec   25.39 sec   26.7
        Pizza restaurant orange county California   0.93 sec   32.03 sec   34.4
        President united states of America          2.5 sec    36.98 sec   14.8

    Table 6: SELECT COUNT(*)

        mysql> SELECT count(*) FROM si_<TB>
               WHERE match(si_title, si_text) against (<TERM>) LIMIT 10;

        Term                                        InnoDB     MyISAM    Times Faster
        family                                      0.47 sec   82 sec    174.5
        family film                                 0.83 sec   131 sec   157.8
        Pizza restaurant orange county California   0.74 sec   106 sec   143.2
        President united states of America          1.96 sec   220 sec   112.2

    Again, Tables 3 through 6 all show InnoDB consistently outperforming MyISAM on these queries by a large margin. It becomes obvious that InnoDB has a great advantage over MyISAM in handling large-data search.

    Summary: These results demonstrate the great performance we can achieve by making the MySQL optimizer and InnoDB Full-Text Search more tightly coupled. There are still many cases where InnoDB's result info is not fully taken advantage of, which means we still have great room to improve. We will continue exploring this area to get more dramatic results for InnoDB full-text searches. Jimmy Yang, September 29, 2012
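    For anyone wanting to reproduce the diagnostic printout shown above, a sketch of turning it on from the shell (the variable is global and dynamic; the account needs the privilege to set global variables):

        # Enable InnoDB FTS diagnostic output; results appear in the error log.
        mysql -u root -p -e "SET GLOBAL innodb_ft_enable_diag_print = ON;"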

    Read the article

  • Is Joel Test really a good gauging tool?

    - by henry
    I just learned about the Joel Test. I have been a computer programmer for 22 years, but somehow never heard about it before. I consider my best job so far to be a small investment-management company with 30 employees and only 3 people in the IT department. I am no longer with them, but I worked there for 5 years - my longest streak with any given company. To my surprise, they scored extremely poorly on the Joel Test. The only two questions I would answer "yes" to are #4: Do you have a bug database? and #9: Do you use the best tools money can buy? Everything else is either "sometimes" or a straight "no". Here is what I liked about the company, however: a) Good pay; they bragged about it to my face and I bragged about it to their faces, so it was almost like a family environment. b) I always knew the big picture. When writing code to solve a particular problem, there was no ambiguity about the business nature of that problem. Even though we did not always have written specs, we could ask business users a question anytime, often yelling it across the floor. I could even talk to executives any time I felt like it: no appointment necessary. c) Immediate feedback. Once we implemented a solution and made business users happy, they immediately let us know, and we (programmers) became the heroes of the moment. d) No red tape. I could always buy any tools I deemed necessary and design solutions the way my professional judgment dictated. e) Flexibility. If I had a mid-day dental appointment near my house rather than near the office, I would send an email to the company: "FYI: I work from home today". As long as one of the 3 IT guys was on the floor (to help traders in case their monitors went dark), they did not care where the 2 others were. So the question thus becomes: how valuable is the Joel Test? Why bother with it?

    Read the article

  • Pub banter - content strategy at the ballot box?

    - by Roger Hart
    Last night, I was challenged to explain (and defend) content strategy. Three sheets to the wind after a pub quiz, this is no simple task, but I hope I acquitted myself passably. I say "hope" because there was a really interesting question I couldn't answer to my own satisfaction. I wonder if any of you folks out there in the ethereal internet hive-mind can help me out? A friend - a rather concrete thinker who mathematically models complex biological systems for a living - pointed out that my examples were largely rooted in business-to-business web sales and support. He challenged me with: Say you've got a political website, so your goal is to have somebody read it and vote for you - how do you measure the effectiveness of that content? Well, you would. umm. Oh dear. I guess what we're talking about here, to yank it back to my present comfort zone, is a sales process where your point of conversion is off the site. The political example is perhaps a little below the belt, since what you can and can't do, and what data you can and can't collect, is so restricted. You can't throw up a "How did you hear about this election?" questionnaire in the polling booth. Exit polls don't pull in your browsing history and site session information. Not everyone fatuously tweets and geo-tags each moment of their lives. Oh, and folks lie. The business example might be easier to attack. You could have, say, a site for a farm shop that only did over-the-counter sales. Either way, it's tricky. I fell back on some of the work I've done on usability testing and benchmarking documentation, and suggested similar quick-and-dirty, small-sample qualitative UX trials. I'm not wholly sure that was right. Any thoughts? How might we measure and curate for this kind of discontinuous conversion?

    Read the article

  • SO-overflow induced passivity - how to cope?

    - by Ruben
    After not really working on my pet project for a while, I discovered Stack Overflow, and upon perusing it more intensely I was quite amazed. I'm a bit of a perfectionist, so when I found eye-openers here highlighting many of the mistakes I had made, I first wanted to fix everything. However, it's a pet project for a reason: I'm self-taught and I'm studying psychology, so programming skills can never become priority one (though they often help, even in this field). Issues that stuck out were: numerous security issues (e.g. CSRF prevention and bcrypt eluded me); not object-oriented (at least the PHP part; the JS part mostly is); no PHP framework used, so many of my DIY takes on commonly-tackled components (auth, ...) are either bad or inefficient; really poor MySQL usage (no prepared statements, the old mysql extension, heard about setting proper indices two days ago); using MooTools even though jQuery seems to be fashionable, so there's probably always going to be better integration with services I'd like to use (like Google Visualization). So, my SO-induced frenzy turned into passivity. I can't do it all (soon) in the rather small amount of spare time I can spend working on my project. I can leave some of the issues be in good conscience (speed stuff: an unfinished and unpublished project will never become popular, right?). No clear conscience without good security, though, and if I don't use a framework for auth and other complex stuff, I'll regret having to do it myself. One obvious answer would probably be going open source, but I think the project would need to become more impressive before others would commit to it. I can't afford to employ someone either. I do think the project deserves being worked on, though. How should I tackle it anyway? What's the best practice for little-practice people?

    Read the article

  • The lifecycle of "cool"

    - by Dori
    I've been thinking lately about how some programming projects/products become "cool," and in particular how that trend can later reverse. Here are two examples that might better explain my context: TextMate. Whenever someone asks about text editors on OS X, the answer on the SE sites is an automatic "TextMate!" But looked at objectively: TextMate 1.0 shipped October 2004; TextMate 1.5 shipped January 2006; TextMate 2 was announced February 2006; as of September 2010, the currently shipping version is 1.5.9; and in all of 2010 there have been a total of three posts on the TextMate blog. At what point (if ever) do TextMate fans start thinking about switching to another text editor? When it breaks after some future Apple update? When alpha geeks they respect start recommending something else? Or? jQuery. Whenever a JavaScript-related question is asked on the SE sites, the knee-jerk response is "jQuery!" I've seen it happen even when the question itself only required a single line of JavaScript, or when the question could be better answered by using CSS. Do the answerers understand they're suggesting a blowtorch to light a candle? That they're recommending adding 70K or so of code to do something trivial? Or is it a symptom of "when you have a hammer, everything looks like a nail" - that is, jQuery is all they know how to do, so that's their recommendation? And do they understand that while they may know jQuery well, that doesn't necessarily mean they know JavaScript? Is there a way to explain that learning JavaScript would make them better jQuery programmers? My bigger-picture questions: Is this niche focus primarily a trait of programmers? How do you get programmers to not immediately jump to recommending their personal favorites? What can motivate programmers to review their initial selection criteria and possibly modify their choice? Your thoughts?

    Read the article

  • SDL & Windows 8 Metro WinRT

    - by Adrian
    I am just beginning to dip my toe into game programming and have been reading up on SDL, SFML, OpenGL, XNA, MonoGame, and of course DirectX. (Needless to say, there are a lot of choices out there.) As much as I like SFML's syntax, I have chosen to read up on and start with SDL, as it is pretty ubiquitous and available on every platform (Windows, Linux, Mac) and also on portable devices (Android, iOS), with the current exception of Windows Phone 7. After that preamble, here is my question. I notice that the docs say that on the Windows platform the SDL API calls through to DirectX for higher performance (http://www.libsdl.org/intro.en/whatplatforms.html). Microsoft has said that Metro game apps can only use DirectX (which means no XNA, no OpenGL, no SFML, etc.). My question is: if SDL just wraps DirectX calls, will I (we) be able to use SDL to bring games to the new Metro WinRT environment and the Windows 8 marketplace? This would be great if possible. Additionally, as WinPhone 8 is supposedly built on Win8, this could mean SDL would be available on the phone in the future too. Thanks for your time in responding to this question; I look forward to hearing your response. EDIT: Based on DeadMG's answer, I have installed Visual Studio 11 (beta) in Windows 8 Consumer Preview (CP) and went to File > New to check project types. The project types "Blank Application", "Direct2D Application" and "Direct3D Application" look of interest. I have selected "Direct2D App", but SDL generates its own window when you call SDL_Init(). Is it possible to link/set up the SDL window to point to the Direct2D surface in this project?

    Read the article

  • What steps should I follow to start developing website applications?

    - by Oscar Mederos
    Hello, I've been developing desktop applications for about 4 years, using .NET, C++, C, and a little Python. I've covered lots of topics while developing my applications, including web technologies (cookies, GET/POST methods, when programming some scrapers/crawlers). I've always been waiting to start developing websites, preferably using PHP + MySQL, although other advice is welcome to make this question more useful and generic for others. I know I could use a CMS instead of starting from scratch, but sometimes I don't need an entire CMS to do minor things... What steps should I follow to create a website? Let's suppose I have a web designer. First of all, does the designer design the entire website (CSS, etc.) and then I do the programming work, like loading things dynamically from databases and doing some client-side work with JavaScript? Or what is the best way to do it? Edit: I'm not looking for tools/frameworks/languages suggestions. What I want to know is how a team (or a developer with a designer) starts creating a website: the steps they follow, what tasks they do first, how they integrate the work, etc. An example of an answer could be: 1) Design the entire website with good CSS practices, using containers instead of tables in some cases, etc. 2) Use that design and develop the logic or the functionality of the website. Of course, that's just an example. I'm looking for a good way to approach it, because I've been wanting to start on it but don't really know how exactly to organize the job :/

    Read the article

  • SQL Contest – Result of Cartoon Contest

    - by pinaldave
    Earlier we ran an excellent contest with the help of Embarcadero Technologies. We had two different contests on the same day, sponsored by the kind folks at Embarcadero. Here are the details of the winners. 1) Win USD 25 Amazon Gift Cards (10 Units): We had announced that we would award USD 25 Amazon Gift Cards to 10 lucky winners who downloaded the DB Optimizer between Nov 29 and Dec 8. Winners will receive their USD 25 Amazon Gift Cards at their registered email address within 5 days of this blog post. If you do not receive the card, do send me an email (Pinal at sqlauthority.com) and I will follow up on the details. Names of the winners: Ramdas Narayanan, Krishna Uppuluri, Donna Kray, Santosh Gupta, Robert Small, Samit Bhatt, Bernd Baumanns, Rodrigo Oriola, Jim Woodin, Alfred Sandou. 2) Win Star Wars R2-D2 Inflatable R/C: We had a cartoon contest. If you have not read the cartoon, I suggest you go over the cartoon story one more time. The task was to give the correct answer along with an interesting note. We selected a few good quotes, put them together, and then picked the winner using a random algorithm. The winner gets a fantastic Star Wars R2-D2 Inflatable R/C. Name of the winner: Aadhar Joshi. He wins the R2-D2. You can read his comment over here. Thank you all for participating in the contest - this was fun. If you liked it, do let me know and we will come up with something new for you next time. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • A Technical Perspective On Rapid Planning

    - by Robert Story
    Upcoming Webcast. Title: Strategic Network Optimization - One Solution for Many Problems! Date: April 14, 2010. Time: 11:00 am EDT, 9:00 am MDT, 8:00 am PDT, 16:00 GMT. Product Family: Value Chain Planning. Summary: This one-hour session is recommended for System Administrators, Database Administrators, and Technical Users seeking a general overview of Rapid Planning, installation issues, and debug information. This webcast is intended to provide users with insight into known issues and an overview of the debugging possibilities for Rapid Planning. Topics will include: Benefits of using simulation planning; Installing Oracle Rapid Planning, points to be aware of; Relevant tables; Rapid Planning log files; Information needed by support. A short, live demonstration (only if applicable) and question and answer period will be included. Click here to register for this session. The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • How to zero out a drive?

    - by Mohd Arafat Hossain
    I'm currently running Ubuntu 12.04 and want to completely format my laptop so I can install Windows 7 (replacing Ubuntu). When I put in my bootable USB with the OS, it just shows a black screen with a white blinking cursor in the top-left corner. I have waited for hours (to be specific, 5 whole hours) for something to happen, but I get nothing, so I pull out the USB and my Ubuntu 12.04 loads up. I repeated this several times, putting different USB-related boot options at the top of the boot priority list, but the results are the same. I visited some sites on the net, like this one: http://www.techspot.com/community/topics/cant-replace-ubuntu-with-windows.175716/ which say I have to zero out my hard drive. My question is: how? Note: Erasing all my data is no problem, because I have nothing important to back up. If I have to lose my primary OS (Ubuntu 12.04) in the process, I am ready to, as my aim is just to install Windows 7 successfully. Please don't answer that there is something wrong with my USB drive, my USB reader/port/hardware, or the content inside the USB, because they all work fine on my brother's PC, where it boots up flawlessly.
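    For reference, the usual way to zero a drive is dd, run from a live CD/USB session rather than from the installed system. A destructive sketch; the device name /dev/sda is an assumption, so verify it first:

        # DANGER: this irreversibly erases the target disk. Check which
        # device is the laptop's drive before running dd.
        sudo fdisk -l
        sudo dd if=/dev/zero of=/dev/sda bs=4M
        # Zeroing just the first sector (MBR + partition table) is often
        # enough to clear boot leftovers, and is much faster:
        sudo dd if=/dev/zero of=/dev/sda bs=512 count=1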

    Read the article

  • Having a problem with texturing vertices in WebGL, think parameters are off in the image?

    - by mathacka
    I'm having a problem texturing a simple rectangle in my WebGL program. I have the parameters set as follows: gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, textureImage); I'm using this image: In the properties of this image it says it has 32-bit depth, so that should take care of gl.UNSIGNED_BYTE, and I've tried both gl.RGBA and gl.RGB to see if it's not reading the transparency. It is a 32x32-pixel image, so it's a power of 2. I've tried almost all the combinations of formats and types, but I'm not sure if this is the answer or not. I'm getting these two errors in the Chrome console:

        INVALID_VALUE: texImage2D: invalid image (index):101
        WebGL: drawArrays: texture bound to texture unit 0 is not renderable. It maybe non-power-of-2 and have incompatible texture filtering or is not 'texture complete'. Or the texture is Float or Half Float type with linear filtering while OES_float_linear or OES_half_float_linear extension is not enabled.

    The drawArrays call is simply gl.drawArrays(gl.TRIANGLES, 0, 6); using 6 vertices to make a rectangle.

    Read the article

  • Dependencies lib32asound2 [duplicate]

    - by The Mini John
    This question already has an answer here: "'teamviewer depends on (…)' while trying to install TeamViewer" (4 answers). I was trying to install TeamViewer, but I was getting a dependency error. I tried to install the dependencies, but with no luck. I think mods are not reading the questions through when they mark them as duplicates. I'm getting this error:

        Unpacking teamviewer (from teamviewer_linux_x64.deb) ...
        dpkg: dependency problems prevent configuration of teamviewer:
         teamviewer depends on lib32asound2; however:
          Package lib32asound2 is not installed.
         teamviewer depends on lib32z1; however:
          Package lib32z1 is not installed.
         teamviewer depends on ia32-libs; however:
          Package ia32-libs is not installed.
        dpkg: error processing teamviewer (--install):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         teamviewer

    I tried sudo apt-get -f install but I am getting:

        Package ia32-libs is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        However the following packages replace it:
          lib32z1 lib32ncurses5 lib32bz2-1.0
        Package lib32asound2 is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package 'lib32asound2' has no installation candidate
        E: Package 'ia32-libs' has no installation candidate

    I can't even get to the sudo dpkg -i teamviewer_linux_x64.deb step. If I force installation with sudo dpkg --force-depends -i teamviewer_linux_x64.deb, although it says "Setting up teamviewer", it gives me this. I'm fairly new to Ubuntu; can anyone help me out? I'm on Ubuntu 13.10.
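    On releases where the old ia32-libs metapackage has been dropped, one hedged workaround is to enable i386 multiarch and let apt resolve the 32-bit dependencies itself (package availability on a given release is an assumption):

        sudo dpkg --add-architecture i386
        sudo apt-get update
        sudo dpkg -i teamviewer_linux_x64.deb
        sudo apt-get -f install    # pulls in the missing :i386 dependencies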

    Read the article

  • GnoMenu integration in docky.

    - by Lyon Apostol
    Posting the error. Running docky --debug gives (some other output omitted):

        [Info 10:27:52.535] [Helper] Starting GnoMenu.py
        [Info 10:27:52.560] [HelperService] Helper added: /usr/share/dockmanager/scripts/GnoMenu.py
        [Info 10:27:52.757] [Helper] GnoMenu.py :: No module named dockmanager.dockmanager
        [Info 10:27:52.758] [Helper] GnoMenu.py :: gconf backend
        [Info 10:27:52.758] [Helper] GnoMenu.py :: 239
        [Error 10:27:52.759] [Helper] GnoMenu.py :: Traceback (most recent call last):
        [Error 10:27:52.759] [Helper] GnoMenu.py ::   File "/usr/share/dockmanager/scripts/GnoMenu.py", line 120, in <module>
        [Error 10:27:52.759] [Helper] GnoMenu.py ::     class DockyGnoMenuItem(DockManagerItem):
        [Error 10:27:52.759] [Helper] GnoMenu.py :: NameError: name 'DockManagerItem' is not defined
        [Info 10:27:52.874] [Helper] GnoMenu.py has exited (Code 1).

    After installing dockmanager, this is what happens:

        [Info 11:34:36.166] [Helper] GnoMenu.py :: gconf backend
        [Info 11:34:36.166] [Helper] GnoMenu.py :: 136
        [Info 11:34:36.166] [Helper] GnoMenu.py :: DockManagerSink(): System.NotSupportedException: Cannot send null variant
        [Info 11:34:36.210] [Helper] GnoMenu.py has exited (Code 0).

    When I click the icon in the dock, I get:

        gconf backend
        380
        DockManagerSink(): System.NotSupportedException: Cannot send null variant
        gconf backend
        380
        DockManagerSink(): System.NotSupportedException: Cannot send null variant

    How can I fix this? Thanks for your prompt answer!
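    The first traceback says the helper could not import the dockmanager module, which is why DockManagerItem ends up undefined. A quick hedged check from a terminal (the module path is taken from the log; whether it is on the default Python path is an assumption):

        # If this import fails, the dockmanager Python package the helper
        # needs is missing or not visible on the Python path.
        python -c "from dockmanager.dockmanager import DockManagerItem; print('found')"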

    Read the article

  • Scaling Scrum within a group of 100s of programmers

    - by blunders
    Most Scrum teams lean toward 7-15 people**, though it's not clear how to scale Scrum among hundreds of people, or how the effectiveness of a given team might be compared to another team within the group; meaning that beyond just breaking the group into Scrum teams of 7-15 people, it's unclear how efforts between the teams are managed, compared, etc. Any suggestions related to either of these topics, or additional related topics that might be of more importance to account for in planning a large-scale Scrum grouping? ** In reviewing research related to the suggested size of software development teams, which appears to be the basis for the suggested Scrum team size, I found what appears to be an error in the research, which oddly shows that bigger teams (15+ people), not smaller teams (7 people), are better. UPDATE, "Re: Scrum doesn't scale": I have made huge amounts of progress researching the topic personally, but thought I'd respond to the general belief of some that Scrum doesn't scale by citing a quote from Succeeding with Agile by Mike Cohn: "Scrum Does Scale: You have to admire the intellectual honesty of the earliest agile authors. They were all very careful to say that agile methodologies like Scrum were for small projects. This conservatism wasn't because agile or Scrum turned out to be unsuited for large projects but because they hadn't used these processes on large projects and so were reluctant to advise their readers to do so. But, in the years since the Agile Manifesto and the books that came shortly before and after it, we have learned that the principles and practices of agile development can be scaled up and applied on large projects, albeit with a considerable amount of overhead. Fortunately, if large organizations use the techniques described regarding the role of the product owner, working with a shared product backlog, being mindful of dependencies, coordinating work among teams, and cultivating communities of practice, they can successfully scale a Scrum project." SOURCE: (I ran across the book thanks to Ladislav Mrnka's answer)

    Read the article

  • Running CRON job on Ubuntu server for SugarCRM

    - by Logik
    I am pretty inexperienced with Linux, so be descriptive in your answer. My environment: a local Linux server running Ubuntu 12.04, hosting SugarCRM 6.5.2. There is an area in SugarCRM called the Scheduler, where I can configure some predefined jobs; in my case I am trying to run email reminders (every min/hour/day/month). For the Scheduler to be effective, I read somewhere that I need to set up a cron job. So I did some research and finally put the following line in the crontab for the root user, as per the instructions given in SugarCRM: * * * * * cd /var/www/crm; php -f cron.php > /dev/null 2>&1 I am creating contracts in my SugarCRM (AOS module), and I want email reminders for these contracts to be sent to the person concerned. My SugarCRM email is configured correctly, and I can send test emails using it. But cron + Scheduler gives no result: I don't receive any emails. I then tried reading /var/log/syslog, and it shows an entry for the following line each minute: Oct 27 15:03:01 unicomm CRON[28182]: (root) CMD (cd /var/www/crm; php -f cron.php > /dev/null 2>&1) I have a few questions: 1) What does the cron job line I've added to the crontab mean? cd /var/www/crm; php -f cron.php > /dev/null 2>&1 is not making any sense to me. 2) How am I supposed to get this thing to work? I've searched a lot (including the SugarCRM forum), but no luck.
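    To the first question, here is the crontab line decoded, plus a hedged debugging variant that keeps the output instead of discarding it (the log path is an assumption):

        # Field layout:  minute hour day-of-month month day-of-week  command
        # "* * * * *" means: run the command every minute of every day.
        # "cd /var/www/crm; php -f cron.php" runs SugarCRM's scheduler entry point.
        # "> /dev/null 2>&1" sends stdout to /dev/null and stderr to the same
        # place, i.e. all output is discarded.
        * * * * * cd /var/www/crm; php -f cron.php > /dev/null 2>&1

        # Debug variant: append the output to a log so scheduler errors become visible.
        * * * * * cd /var/www/crm; php -f cron.php >> /tmp/sugarcrm-cron.log 2>&1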

    Read the article

  • Why are SW engineering interviews disproportionately difficult?

    - by stackoverflowuser2010
    First, some background on me. I have a PhD in CS and have had jobs both as a software engineer and as an R&D research scientist, both at Very Large Corporations You Know Very Well. I recently changed jobs and interviewed for both types of positions (as I have done in the past). My observation: SW engineer job interviews are way, way disproportionately more difficult than CS researcher job interviews, yet the researcher job is higher paying, more competitive, more rewarding, more interesting, and has a higher upside. Here's a typical interview loop for a researcher: 1) a phone interview to see if my research is in alignment with the lab's research; 2) in person, give a one-hour presentation on my recent research (which represents maybe 9 months' worth of work) and answer questions; 3) in-person one-on-one interviews with about 5 researchers, where they ask very reasonable questions about my work/publications/patents, including technical questions, where my work fits into related work, and how I can extend my work to new areas. Here's a typical interview loop for a SW engineer: 1) a phone interview where I'm asked algorithm questions and maybe do some coding - pretty standard; 2) in-person interviews at the whiteboard where they drill the F*** out of you on esoteric C++ minutiae (e.g. how does a polymorphic virtual function call work), algorithms (make an all-pairs-shortest-path algorithm work for 1B vertices), system design (design a database load balancer), etc. This goes on for six or seven interviews. Ridiculous. Why would anyone be willing to put up with this? What is the point of asking about C++ trivia or writing code to prove yourself? Why not make the SE interview more like the researcher interview, where you give a talk about what you've done? What are technical job interviews like in other fields, such as physics, chemistry, civil engineering, or mechanical engineering?

    Read the article

  • Network and Storage Devices Throughput Chart

    - by zroiy
    With all of the different storage and network devices that surround our day-to-day life, understanding these devices' data transfer speeds can be somewhat confusing. Think about trying to identify the weakest link in a chain that starts with an external USB hard drive (or a flash drive) connected to an 802.11g wifi router: can you quickly say where the bottleneck in that chain is - the router or the storage device? Well, the following chart should give you an idea of the maximum throughput speeds of different devices, protocols, and interfaces. Though these numbers can fluctuate (mostly for the worse, but sometimes for the better) due to different kinds of factors, such as OS overhead (or caching and optimization), multiple users or processes, and so on, the chart still provides basic information on the theoretical throughput different devices and protocols can reach. Enjoy. Link to the full-size chart. References:
    http://en.wikipedia.org/wiki/Sata#SATA_revision_1.0_.28SATA_1.5_Gbit.2Fs.29
    http://en.wikipedia.org/wiki/Usb
    http://en.wikipedia.org/wiki/Usb_3
    http://en.wikipedia.org/wiki/802.11
    http://mashable.com/2011/09/21/fastest-download-speeds-infographic/
    http://en.wikipedia.org/wiki/Thunderbolt_(interface)
    http://www.computerworld.com/s/article/9220434/Thunderbolt_vs._SuperSpeed_USB_3.0
    Icons: http://openiconlibrary.sourceforge.net/gallery2/?./Icons/devices/drive-harddisk-3.png
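    A worked example of the most common point of confusion when reading charts like this, the bits-versus-bytes mixup (USB 2.0's 480 Mbit/s is its published nominal rate; the arithmetic is the point here):

        # Interface speeds are quoted in megabits/s, while file copies are
        # shown in megabytes/s. Divide by 8 to convert, e.g. for USB 2.0:
        echo $((480 / 8))   # -> 60 MB/s theoretical ceiling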

    Read the article

  • Programming Practice/Test Contest?

    - by Emmanuel
    My situation: I'm on a programming team, and this year we want to weed out the weak link by holding a competition to pick the best coder from our group of candidates. The focus is on IEEExtreme-like contests. What I've done: I've been trying for 2 weeks to find a practice or test site, like UVa or CodeChef. The plan, once I find one: send the candidates a list of direct links to the problems (making that the contest's problem list), have them email me their correct answers' code at the time the judge says they have solved it, and accept the fastest one into the team. Issues: We had already practiced on UVa (and on Programming Challenges too), so our former teammate (who will be in the candidate group) has an advantage if we use it. CodeChef has all its answers public, and since it shows the latest ones, it would be extremely hard to verify whether an answer was copied. And I've found other sites, like SPOJ, but they share at least some problems with CodeChef, so they inherit CodeChef's issue. So, what alternatives do you think there are? Any site that may work? Any place to get everything needed to set up a Mooshak or similar contest (as in, the material to get the problems; instructions to set up the server itself are easy to google)? Any other ideas?

    Read the article

  • Writing cross-platforms Types, Interfaces and Classes/Methods in C++

    - by user827992
    I'm looking for the best approach to writing cross-platform software, i.e. code that I write once and then interface with different libraries and platforms. What I consider the easiest part (correct me if I'm wrong) is the definition of new types: all I have to do is write an hpp file with a list of typedefs. I can keep the same names for each type across the different platforms, so my codebase can be shared without any problem. The typedefs also help me give my types a better scope in my code. I will probably end up with something like this:

        include
        |-- windows
        |   |-- types.hpp
        |-- linux
        |   |-- types.hpp
        |-- mac
            |-- types.hpp

    For the interfaces I'm thinking about the same solution used for the types: a series of hpp files. I will probably write each interface only once, since the interfaces rely on the types and all the "cross-platform portability" is ensured by the work done on the types:

        include
        |-- interfaces.hpp
        |-- windows
        |   |-- types.hpp
        |-- linux
        |   |-- types.hpp
        |-- mac
            |-- types.hpp

    For classes and methods I do not have a real answer. I would like to avoid two things: the explicit use of pointers, and the use of templates. I want to avoid pointers because they can make the code less readable for some people, and templates because if I write them I can't separate the interface from the definition. What is the best option for hiding the use of pointers? I would also like a few words about macros and how to implement some OS-specific calls and definitions.
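    One way to make the per-platform header trees above work at build time is to select the include directory from the build script rather than with #ifdefs in the code. A hedged sketch (the paths and file names are assumptions matching the trees shown):

        # Pick the platform-specific headers at compile time; the shared
        # interfaces.hpp includes "types.hpp" and resolves to the right one
        # via the -I search path.
        g++ -Iinclude -Iinclude/linux -c src/main.cpp -o main.o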

    Read the article
