Search Results

Search found 4242 results on 170 pages for 'mark lawtie'.

Page 25/170

  • SQL Source Control Contest

    - by Ajarn Mark Caldwell
    If you’re a regular reader of this blog, you know that I have written several posts about how important I think it is to protect your source code, to version it, and in particular, all the aspects I like about Red Gate’s SQL Source Control product.  But for a moment, let’s take a break from my writing: I want to hear your stories.  What nightmare situation are you in, or can you imagine, where source control for your database would save the world?  Or maybe your life is not so dramatic, but you do see a challenge that would go much more smoothly if you just had a good tool like SQL Source Control.  What’s your pain?  You have read my writings, now tell me your story, and be in the running for a free copy of SQL Source Control from Red Gate. Yes, that’s right.  Although I am just a fan of Red Gate, they have authorized me to give out a handful of licenses to blog readers who are willing to share their story by posting a comment to this blog entry.  Simply add your comment below (be sure to include a valid email address in the box that asks for that) to be entered.  The contest starts immediately, and over the next few days the best stories will win.

    Read the article

  • Double entries in the gnome 3 task bar

    - by Mark
    I am running Ubuntu 12.04 with Gnome 3. All was working well except that graphics were slow and even moving a window on the screen seemed slow. I installed the fglrx ATI driver, which seems to have improved matters. But on first login I had all Gnome items duplicated. That is, my task bar has the Ubuntu sign, then says Applications, then Places, then the Ubuntu sign, then Applications, and then Places again. Any application I run produces two icons at the bottom of the screen. This was after a reboot. I rebooted again. Now I have 3!! On the right, each set of icons, such as the printer, is also tripled!! See screenshot at http://jetmark.co.uk/Screenshot.png See dmesg at http://jetmark.co.uk/dmesg.txt Any suggestions welcome. Reboot - now I have 4!! So one set gets added on each reboot. Help!! I am going to be task-barred out before long!!

    Read the article

  • Dual monitor not working completely in 12.10 after upgrade

    - by Mark Baldridge
    At 12.04, dual monitors worked perfectly. After upgrading to 12.10, the primary monitor works, but the second monitor only partly works. I am sure there is some difference between the releases that I have missed setting properly. System Settings - Displays shows both correctly as Acer 22" monitors at 1680x1050 (16:10).

    An icon on monitor 2 is present, but elongated; almost an artifact, since other icons from the primary screen are absent, but this one icon is there on the second monitor. Icons on both screens can be selected. Painting is weird on monitor 2. The launcher exists and works on both screens, but even with sticky edges off, the cursor stops at the left edge of monitor 2. Clicking on the text editor in the screen 2 launcher will launch gedit there. If I drag it, it leaves a trail of after-images, as if repainting is failing. If I move the cursor along the launcher, the help tags like "LibreOffice Writer" appear, but stay on screen unless I drag the active gedit window over them. Then part of the help bubbles are overwritten, leaving behind after-images of the gedit window on screen.

    What is really fascinating is that System Settings - Displays is now ignoring monitor selection, after allowing it earlier. Just before this, the help popup which said "Select a monitor to change its properties; drag to rearrange its placement" actually let me do that. Maybe a trick of where I grab the edge of the monitor in the Displays setting. I just found a working handle. When I drag monitor 1 to the right of monitor 2, "Apply" and confirm, both monitors work normally (although the right monitor lets the cursor slide off the right edge onto the left edge of monitor 1 - which sounds correct). Painting of windows does not leave an after-image.

    However, success is only temporary. The setting survives the reboot, but painting on the left monitor, now monitor 2, replicates the issues from before. The after-image of the gedit window and the small window for "Are you sure you want to close all programs and restart the computer?" are still on monitor 2 (on the left now), even though they are not real windows, nor do they have processes behind them. Curiously, in Displays, the "green" monitor on the left in the display window is matched by the right monitor's color in its upper left corner. Probably makes sense, as the one on the right is now monitor 1. If I repeat the "drag the left monitor to the right of the right monitor" step in the Displays window, things are oriented properly, with no display artifacts as I drag windows around either screen. Also, the description bubbles that pop up are properly overwritten on both screens, so none of those artifacts either. This goodness does not survive a reboot, however. I have not tried logging out and back in.

    All of this after positing that the motherboard VGA and HDMI ports could have been the issue. So, I installed an e-GeForce 7600 GT Dual DVI card (I know the web thinks it is not DVI but VGA, but the connectors are DVI). No change to the weird behavior: the good parts continue to work, the weirdness is still there, and swapping monitor positions seems to cure the issue.

    So, is there a setting I have missed? Given that "swapping" monitors 1 and 2 in System Settings - Displays makes it work, just not across a boot, I suspect so.

    Read the article

  • Can't connect to or search Wi-Fi

    - by Mark
    I just installed the latest version of Ubuntu; I think it was 13 or something. Anyway, I got it running, and now I've been trying to connect to my Wi-Fi AP. No matter where I go I don't see an option for Wi-Fi. I looked on the computer and found some commands to use in a terminal to run a scan for Wi-Fi networks, but all of them returned the error: interface doesn't support scanning What are step-by-step directions to make this work?

    Read the article

  • Migrating R Scripts from Development to Production

    - by Mark Hornick
    “How do I move my R scripts stored in one database instance to another? I have my development/test system and want to migrate to production.” Users of Oracle R Enterprise Embedded R Execution will often store their R scripts in the R Script Repository in Oracle Database, especially when using the ORE SQL API. From previous blog posts, you may recall that Embedded R Execution enables running R scripts managed by Oracle Database using both R and SQL interfaces. In ORE 1.3.1, the SQL API requires scripts to be stored in the database and referenced by name in SQL queries. The SQL API enables seamless integration with database-based applications and ease of production deployment.

    Loading R scripts in the repository

    Before talking about migration, we’ll first introduce how users store R scripts in Oracle Database. Users can add R scripts to the repository in R using the function ore.scriptCreate, or in SQL using the function sys.rqScriptCreate. For the sample R script

        id <- 1:10
        plot(1:100, rnorm(100), pch=21, bg="red", cex=2)
        data.frame(id=id, val=id / 100)

    users wrap this in a function and store it in the R Script Repository with a name. In R, this looks like

        ore.scriptCreate("RandomRedDots", function () {
          id <- 1:10
          plot(1:100, rnorm(100), pch=21, bg="red", cex=2)
          data.frame(id=id, val=id / 100)
        })

    In SQL, this looks like

        begin
          sys.rqScriptCreate('RandomRedDots',
            'function(){
               id <- 1:10
               plot(1:100, rnorm(100), pch=21, bg="red", cex=2)
               data.frame(id=id, val=id / 100)
             }');
        end;
        /

    The R function ore.scriptDrop and SQL function sys.rqScriptDrop can be used to drop these scripts as well. Note that the system will give an error if the script name already exists.

    Accessing R scripts once they’ve been loaded

    If you’re not using a source code control system, it is possible that your R scripts can be misplaced or files modified, making what is stored in Oracle Database the only or best copy of your R code. If you’ve loaded your R scripts to the database, it is straightforward to access these scripts from the database table SYS.RQ_SCRIPTS. For example,

        select * from sys.rq_scripts where name='myScriptName';

    From R, scripts in the repository can be loaded into the R client engine using a function similar to the following:

        ore.scriptLoad <- function(name) {
          query <- paste("select script from sys.rq_scripts where name='", name, "'", sep="")
          str.f <- OREbase:::.ore.dbGetQuery(query)
          assign(name, eval(parse(text = str.f)), pos=1)
        }

        ore.scriptLoad("myFunctionName")

    This function is also useful if you want to load an existing R script from the repository into another R script in the repository – think modular coding style. Just include this function in the body of the other function and load the named script.

    Migrating R scripts from one database instance to another

    To move a set of functions from one system to another, the following script loads the functions from one R script repository into the client R engine, then connects to the target database and creates the scripts there with the same names.

        scriptNames <- OREbase:::.ore.dbGetQuery(
          "select name from sys.rq_scripts where name not like 'RQG$%' and name not like 'RQ$%'")$NAME

        for(s in scriptNames) {
          cat(s, "\n")
          ore.scriptLoad(s)
        }

        ore.disconnect()
        ore.connect("rquser", "orcl", "localhost", "rquser")

        for(s in scriptNames) {
          cat(s, "\n")
          ore.scriptDrop(s)
          ore.scriptCreate(s, get(s))
        }

    Best Practice

    When naming R scripts, keep in mind that the name can be up to 128 characters. As such, consider organizing scripts in a directory-like structure. For example, if an organization has multiple groups or applications sharing the same database and there are multiple components, use “/” to facilitate the function organization:

        ore.scriptCreate("/org1/app1/component1/myFunction1", myFunction1)
        ore.scriptCreate("/org1/app1/component1/myFunction2", myFunction2)
        ore.scriptCreate("/org1/app2/component2/myFunction2", myFunction2)
        ore.scriptCreate("/org2/app2/component1/myFunction3", myFunction3)
        ore.scriptCreate("/org3/app2/component1/myFunction4", myFunction4)

    Users can then query for all functions using the path prefix when looking up functions.
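    For example, a lookup by path prefix can run directly against the script repository table used earlier (a minimal sketch; the '/org1/app1/' prefix is just the illustrative value from the naming example above):

        -- List every script registered under a given organization/application prefix
        select name
          from sys.rq_scripts
         where name like '/org1/app1/%'
         order by name;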

    Read the article

  • Is it possible to auto-mount sshfs?

    - by Mark D
    Is it possible to auto-mount a remote FS using sshfs upon instantiating a proper VPN connection? Allow me to explain the scenario: I'm working remotely, and to do that it helps if I can mount my home dir from a server in the office. To do that I need to VPN in. So within Network Manager I select the relevant VPN and connect. It connects, but now I have to drop to the command line and mount my home dir on several machines. If I forget to do one machine, my local dev environment isn't as efficient. I suppose I could write a quick bash script to do this, but I'd rather have it run automatically when I connect.

    Read the article

  • Significant number of non-HTTP requests hitting my site

    - by Mark Westling
    I'm seeing a significant number of non-HTTP requests hitting a site I just launched. They show up in the server (nginx) logs as non-ASCII and get rejected (correctly) with a 400 status. Here are some lines from the log: 95.132.198.189 - - [09/Jan/2011:13:53:30 -0500] "œ$A\x10õœ²É9J" 400 173 "-" "-" 79.100.145.126 - - [09/Jan/2011:13:57:42 -0500] "#§i²¸oYi á¹„\x13VJ—x·—œ\x04N \x1DÔvbÛè½\x10§¬\x1E0œ_^¼+\x09ÜÅ\x08DÌÃiJeT€¿æ]œr\x1EëîyIÐ/ßýúê5Ǹ" 400 173 "-" "-" 79.100.145.126 - - [09/Jan/2011:13:58:33 -0500] "¯Ú%ø=Œ›D@\x12¼\x1C†ÄÀe\x015mˆàd˜Û%pÛÿ" 400 173 "-" "-" What should I make of this? Is this some sort of scripted attack? Or could these be correct requests that have somehow been garbled? They're not affecting the performance of the site and I'm not seeing any other signs of attacks (e.g., no strange POSTs) so at this point I'm more curious than afraid.

    Read the article

  • How can I neatly embed Flash in a page in a way that is cross-browser compatible?

    - by Mark Hatton
    When I receive Flash objects from my designer, they come with an example HTML page which includes both <object> tags and <embed> tags as well as a whole heap of JavaScript. If I copy and paste this code into my webpage, it works, but the code looks a mess (and there is so much of it!). If I remove the extra code and try either just <embed> or <object> on their own, it works in some browsers, but not others. Is there a neat, minimal method that works in all the major browsers?

    Read the article

  • Turn O&M Operations into Optimized Projects with Oracle Primavera

    - by mark.kromer
    Oracle enterprise project portfolio management with Primavera is about much more than optimizing project performance and eliminating project failure on new projects, capital programs, etc. A very common use case that we see is small-scale, frequent, and recurring projects based on ongoing operations and maintenance. As opposed to assigning resources to various activities when you are building a new network infrastructure, for example, Oracle has teamed up the Primavera and E-Business Suite teams to provide direct integration so that work orders from Oracle's Enterprise Asset Management (eAM) system populate into Primavera P6 project schedules. So now that your network infrastructure build-out project is complete, planners and operations managers can use the world-class what-if and scheduling capabilities in Primavera tools to assign work orders, maximize resource utilization, and reuse templates for typical O&M operations in Primavera, then share that back to the operations teams using eAM for maintenance. Also, large-scale maintenance operations related to large assets in the asset lifecycle include phase-outs, shutdowns, and turn-arounds, which are classic maintenance projects as opposed to building something new, and Oracle Primavera with Oracle E-Business Suite provides full coverage to optimize these ALM processes in your business. Read more about these new capabilities from Oracle in the ERP space in the Oracle eAM data sheet.

    Read the article

  • Hurricane Season 2010 Starts

    - by Mark Treadwell
    Here we are at the start of another hurricane season.  As with past years, I have been preparing.  Last year I had the house painted with a high-quality paint, giving the stucco its best waterproofing possible.  I have a few cracks to patch with elastomeric patching material.  I will paint it to match the house later.  I also need to clean out the anchors for the lower angle brackets and reattach them.  I had removed the brackets for painting. You can read all my past hurricane entries here.  The predictors are promising a busy season.  We have heard that one before, so I put little credence in long range press releases.  Other than that we are pretty much ready.  I have a few new plans I will blog about later, but for now we get to listen to our daily thunderstorms outside.

    Read the article

  • Can I use the test suite from an open source project to verify that my own 'compatible library' is compatible?

    - by Mark Booth
    The question Is it illegal to rewrite every line of an open source project in a slightly different way, and use it in a closed source project? makes me wonder what would be considered a clean-room implementation in the era of open source projects. Hypothetically, if I were to develop a library which duplicates the publicly documented interface of an open-source library, without ever looking at the source code for that library, could that code ever be considered a derivative work? Obviously it would need the same class hierarchy and method signatures, so that it could be a drop-in replacement - could that in itself, be enough to provoke a copyright claim? What about if I used the test suite of the open source project to verify whether my clean implementation behaved in the same way as the original library? Would using the test suite be enough to dirty my clean code? As should be expected from a question like this, I am not looking for specific legal advice, but looking to document experiences people may have had with this sort of issue.

    Read the article

  • Project Jigsaw: On the next train

    - by Mark Reinhold
    I recently proposed to defer Project Jigsaw from Java 8 to Java 9. Feedback on the proposal was about evenly divided as to whether Java 8 should be delayed for Jigsaw, Jigsaw should be deferred to Java 9, or some other, usually less-realistic, option should be taken. The ultimate decision rested, of course, with the Java SE 8 (JSR 337) Expert Group. After due consideration, a strong majority of the EG agreed to my proposal. In light of this decision we can still make progress in Java 8 toward the convergence of the higher-end Java ME Platforms with Java SE. I previously suggested that we consider defining a small number of Profiles which would allow compact configurations of the SE Platform to be built and deployed. JEP 161 lays out a specific initial proposal for such Profiles. There is also much useful work to be done in Java 8 toward the fully-modular platform in Java 9. Alan Bateman has submitted JEP 162, which proposes some changes in Java 8 to smooth the eventual transition to modules, to provide new tools to help developers prepare for modularity, and to deprecate and then, in Java 9, actually remove certain API elements that are a significant impediment to modularization. Thanks to everyone who responded to the proposal with comments and questions. As I wrote initially, deferring Jigsaw to a Java 9 release in 2015 is by no means a pleasant decision. It does, however, still appear to be the best available option, and it is now the plan of record.

    Read the article

  • Oracle Day 2012

    - by Mark Hesse
    As a keynote speaker at this year’s Oracle Day 2012, “Your Vision, Engineered,” I had the honor and pleasure of speaking to a crowd of about 150 attendees about our recently released, fourth-generation Exadata X3 In-Memory Machine in a presentation entitled “Oracle Exadata X3 - Transforming Data Management”. The general theme of the thirty-minute talk was how to improve performance, lower costs, and build the foundation for your cloud service platform using Exadata. Since its introduction in 2008, I’ve watched first-hand as Exadata has evolved from a data warehouse-only system to an OLTP and DW in-memory database machine capable of storing hundreds of terabytes of compressed user data in flash and main memory.  Many of my Exadata customers are now purchasing additional systems as they continue to standardize Oracle 11g deployments on the best database platform available.

    Read the article

  • Can't install Ubuntu 12.04 64 bit and 32 bit

    - by Mark
    I downloaded the Ubuntu 12.04 64-bit image and burned it to a CD, but when I tried to install it I only get a black screen with a message mentioning "Peter Anvin et al" and "Could not find kernel image". I thought there was a problem with the file, so I downloaded it again and installed it, and I always get the same message. I also tried the 32-bit version and I always get the same error. I also tried installing it from a USB and I always get the same. So I just downloaded Wubi, and that works fine on my computer. I'm using a Sony Vaio E Series with a 32-bit system and a 500 GB hard disk drive.

    Read the article

  • CPU spikes to 100% when trying to watch video

    - by mark
    I just installed Ubuntu 12.04 and now the CPU spikes to 100% when trying to watch a video. I don't want to hear that this is a problem with Flash or whatever... Playing video on my machine with Ubuntu 11 was just fine. In fact, I have another machine still with Ubuntu 11 and it continues to play video just fine. Now I'm stuck with this 12.04 version and want to know how to downgrade back to version 11, as that version isn't buggy (hence a downgrade).

    Read the article

  • Running an Application on a Different Domain

    - by Mark Flory
    The place where I am contracting right now has a new development domain.  Because of IT security rules it is fairly isolated from the domain my computer normally logs into (for e-mail and such).  I do use a VM to log directly into the new domain, but one of my co-workers found this command to run things on your own box under the other domain.  Pretty cool. For example, this runs SQL Server Management Studio for SQL Server 2008:

        runas /netonly /user:{domain}\{username} "C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\ssms.exe"

    And this runs Visual Studio:

        runas /netonly /user:{domain}\{username} "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe"

    It does not solve the problem I wanted to solve, which was to be able to assign Users/Groups in Team Explorer.  That instead still uses the groups from the domain I am logged into.

    Read the article

  • Have search engines recognise multi-lingual sites

    - by mark
    Hello. I have organised my site by language using the following structure:

        www.domain.com     # Spanish language version
        www.domain.com/en  # English language version

    The site is of Spanish subject matter, hence the Spanish language version takes priority at the root of the domain. Although the site is new at this stage, I would hope that visits from google.com and google.es would return the English and Spanish versions respectively. Are there any particular steps I need to take to separate the two? Should I add them both as individual sites in Google's Webmaster Tools, and should I submit individual sitemaps for each 'site' or one for the site as a whole? Thanks in advance.

    Read the article

  • I still can't figure out how to program!

    - by Mark K.
    Please help! I've read lots of programming books for various languages: Java, Python, C, etc. I understand and know all of the basics of the languages, and I understand algorithms and data structures. (Equivalent of, say, 2 years of CompSci classes.) BUT I still can't figure out how to write a program that does anything useful. All of the programming books show you how to write the language, but NOT how to use it! The programming examples are all very basic, like building a card catalog for a library, or a simple game, or using algorithms, etc... They don't show you how to develop complex programs that actually do anything useful! I've looked at open-source programs on SourceForge, but they don't make much sense to me. There are hundreds of files in each program and thousands of lines of code. But how do I learn how to do this? There's nothing in any book I can buy on Amazon that will give me the tools to write any of these programs. How do you go from reading Intro to Java, or Programming Python, or The C Programming Language, etc., to actually being able to say, "I have an idea for X program; this is how I go about developing it"? It seems like there is so much more involved in writing a program than you can learn in a book or from a class. I feel like there is something missing. Can anyone put me on the right track?

    Read the article

  • Broken Views

    - by Ajarn Mark Caldwell
    “SELECT *” isn’t just hazardous to performance; it can actually return blatantly wrong information. There are a number of blog posts and articles out there that actively discourage the use of the SELECT * FROM … syntax.  The two most common explanations that I have seen are:

    Performance:  The SELECT * syntax will return every column in the table, but frequently you really only need a few of the columns, and so by using SELECT * you are retrieving large volumes of data that you don’t need, but the system has to process, marshal across tiers, and so on.  It would be much more efficient to only select the specific columns that you need.

    Future-proof:  If you are taking other shortcuts in your code, along with using SELECT *, you are setting yourself up for trouble down the road when enhancements are made to the system.  For example, if you use SELECT * to return results from a table into a DataTable in .NET, and then reference columns positionally (e.g. myDataRow[5]) you could end up with bad data if someone happens to add a column into position 3, skewing all the remaining columns’ ordinal positions.  Or if you use INSERT…SELECT * then you will likely run into errors when a new column is added to the source table in any position.

    And if you use SELECT * in the definition of a view, you will run into a variation of the future-proof problem mentioned above.  One of the guys on my team, Mike Byther, ran across this in a project we were doing, but fortunately he caught it while we were still in development.  I asked him to put together a test to prove that this was related to the use of SELECT * and not some other anomaly.  I’ll walk you through the test script so you can see for yourself what happens.

    We are going to create a table and two views that are based on that table; one of them uses SELECT * and the other explicitly lists the column names.  The script to create these objects is listed below.

        IF OBJECT_ID('testtab') IS NOT NULL DROP TABLE testtab
        go
        IF OBJECT_ID('testtab_vw') IS NOT NULL DROP VIEW testtab_vw
        go
        IF OBJECT_ID('testtab_vw_named') IS NOT NULL DROP VIEW testtab_vw_named
        go

        CREATE TABLE testtab (col1 NVARCHAR(5) null, col2 NVARCHAR(5) null)
        INSERT INTO testtab(col1, col2)
        VALUES ('A','B'), ('A','B')
        GO
        CREATE VIEW testtab_vw AS SELECT * FROM testtab
        GO
        CREATE VIEW testtab_vw_named AS SELECT col1, col2 FROM testtab
        go

    Now, to prove that the two views currently return equivalent results, select from them.

        SELECT 'star', col1, col2 FROM testtab_vw
        SELECT 'named', col1, col2 FROM testtab_vw_named

    OK, so far, so good.  Now, what happens if someone makes a change to the definition of the underlying table, and that change results in a new column being inserted between the two existing columns?  (Side note, I normally prefer to append new columns to the end of the table definition, but some people like to keep their columns alphabetized, and for clarity for later people reviewing the schema, it may make sense to group certain columns together.  Whatever the reason, it sometimes happens, and you need to protect yourself and your code from the repercussions.)
        DROP TABLE testtab
        go
        CREATE TABLE testtab (col1 NVARCHAR(5) null, col3 NVARCHAR(5) NULL, col2 NVARCHAR(5) null)
        INSERT INTO testtab(col1, col3, col2)
        VALUES ('A','C','B'), ('A','C','B')
        go
        SELECT 'star', col1, col2 FROM testtab_vw
        SELECT 'named', col1, col2 FROM testtab_vw_named

    I would have expected that the view using SELECT * in its definition would essentially pass through the column name and still retrieve the correct data, but that is not what happens.  When you run our two select statements again, you see that the view that is based on SELECT * actually retrieves the data based on the ordinal position of the columns at the time that the view was created.  Sure, one work-around is to recreate the view, but you can’t really count on other developers to know the dependencies you have built in, and they won’t necessarily recreate the view when they refactor the table.

    I am sure that there are reasons and justifications for why views behave this way, but I find it particularly disturbing that you can have code asking for col2, but actually be receiving data from col3.  By the way, for the record, this entire scenario and accompanying test script apply to SQL Server 2008 R2 with Service Pack 1.

    So, let the developer beware…know what assumptions are in effect around your code, and keep on discouraging people from using SELECT * syntax in anything but the simplest of ad-hoc queries.

    And of course, let’s clean up after ourselves.  To eliminate the database objects created during this test, run the following commands.

        DROP TABLE testtab
        DROP VIEW testtab_vw
        DROP VIEW testtab_vw_named
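    A related option, not covered in the test above, is SQL Server’s sp_refreshview system procedure, which rebinds a non-schema-bound view’s metadata to the underlying table’s current definition without dropping the view. It has the same weakness as recreating the view (someone has to remember to run it), but it is handy to know. A minimal sketch against the test objects above:

        -- Rebind the SELECT * view to the table's current column layout
        EXEC sp_refreshview 'dbo.testtab_vw';

        -- After the refresh, the view should once again return col2 data for col2
        SELECT 'star', col1, col2 FROM testtab_vw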

    Read the article

  • Ubuntu 12.10 won't boot

    - by John Mark High
    I'm new to using Ubuntu and just bought an HP Pavilion g6-2240sa with Windows 8 pre-installed. I made a bootable USB with Ubuntu on it and installed it alongside Windows. For 2 days it worked fine: I got into Ubuntu by doing an advanced restart from Windows 8 and then booting Ubuntu from the partition it made. When I did the advanced restart today there was only 1 HDD I could select (there were 3 before: Windows, a restore partition that was already there when I got the computer, and Ubuntu), so I booted from the USB again and re-installed Ubuntu. I then did an advanced restart and the 3 partitions were there again. I booted into Ubuntu and now here's my problem. I get the Ubuntu background while it's loading, then it's just a black screen with some writing (it's not on long enough to read), then just a black screen with a white _ in the top left corner that does nothing. I have to restart the computer and it auto-boots into Windows 8. I'm a little confused, as the first time I installed Ubuntu it worked fine (until the partition disappeared); the second time I installed, I did everything the same except it found the old Ubuntu, so I reinstalled it, and now it doesn't work.

    Read the article

  • How do I sync music to my Sony Walkman (Z Series) using Rhythmbox?

    - by Mark Paskal
    I have recently purchased the new Walkman Z from Sony. I can transfer music by mounting it as a drive, but I would prefer to use Rhythmbox to do so. My other Android devices and MP3 players from the past have always just shown up without any tweaking. Using Nautilus to transfer the files is possible but for some reason Nautilus still makes a trash folder on removable drives. Deleting the trash folder anew is really annoying to do every time I delete music from the device. How can I use Rhythmbox to transfer and remove songs instead?

    Read the article

  • PHP - Internal APIs/Libraries - What makes sense?

    - by Mark Locker
    I've been having a discussion lately with some colleagues about the best way to approach a new project, and thought it'd be interesting to get some external thoughts thrown into the mix. Basically, we're redeveloping a fairly large site (written in PHP) and have differing opinions on how the platform should be set up.

    Requirements: The platform will need to support multiple internal websites, as well as external (non-PHP) projects which at the moment consist of a mobile app and a toolbar. We have no plans/need in the foreseeable future to open up an API externally (for use in products other than our own).

    My opinion: We should have a library of well-documented native model classes which can be shared between projects. These models will represent everything in our database and can take advantage of object-oriented features such as inheritance, traits, magic methods, etc., as well as employing ORM. We can then add an API layer on top of these models which can basically accept requests and route them to the appropriate methods, translating the response so that it can be used platform-independently. This routing for each method can be set up as and when it's required.

    Their opinion: We should have a single HTTP API which is used by all projects (internal PHP ones or otherwise).

    My thoughts: To me, there are a number of issues with using the sole HTTP API approach: It will be very expensive performance-wise. One page request will result in several additional HTTP requests (which, although local, are still ones that Apache will need to handle). You'll lose all of the best features PHP has for OO development, from simple inheritance to employing the likes of ORM, which can save you writing a lot of code. For internal projects, the actual process makes me cringe: to get a user's name, for example, a request would go out of our box, over the LAN, back in, then run through a script which calls a method, JSON-encodes the output and feeds that back. That would then need to be JSON-decoded and presented as an array ready to use. Working with arrays, as opposed to objects, makes me sad in a modern PHP framework.

    Their thoughts (and my responses): Having one method of doing things keeps things simple. - You'd only do things differently if you were using a different language anyway. It will become robust. - Seeing as the API will run off the library of models, I think my option would be just as robust.

    What do you think? I'd be really interested to hear the thoughts of others on this, especially as opinions on both sides are not founded on any past experience.

    Read the article

  • ACPI=OFF in Ubuntu 11.10

    - by Mark
    When I tried to upgrade from 11.04 to 11.10 the system froze, so I've been playing around with a couple of other Linux builds (Fedora, Mint, and Puppy) over the last couple of days, and I keep coming around to the same problem: a lockup during boot, each build referencing a kernel error. On another board someone suggested booting with a boot line of "ACPI = off". It works with other OSes, but I'm not sure where to put this command. Can anyone 'enlighten' me, please?

    Read the article
