Search Results

Search found 3630 results on 146 pages for 'procedure'.


  • Attempting to dual boot Ubuntu and Windows 7 on Sony Vaio with Insyde H2O BIOS

    - by zach
    My situation is the same one addressed here: Sony VAIO with Insyde H2O EFI bios will not boot into GRUB EFI, and here: http://www.hackourlife.com/sony-vaio-with-insyde-h2o-efi-bios-ubuntu-12-04-dual-boot. I tried to install Ubuntu 12.04 from the live CD alongside my current Windows 7. I have to switch my BIOS to legacy mode in order to boot from CD. If I do a normal installation and remain in legacy mode, the BIOS displays "operating system not found". If I switch back, the BIOS just boots to Windows. To solve the problem, I tried following the steps in the two articles above. My drive is partitioned as:

    sda1 FAT32 - location of the Windows EFI files (flagged as boot in the Ubuntu installer)
    sda2 unknown
    sda3 NTFS - Windows C:
    sda4 ext4 - Ubuntu root
    sda5 swap
    sda6 ext4 - Ubuntu home

    I was a little confused by the requirement in the second article to "be careful to install Grub bootloader in /dev/sda3"; in my case, the relevant partition is sda1. I have tried three things: setting the sda1 mount point as /boot, as /boot/efi, and as the special reserved GRUB partition. In each install I indicated that GRUB should be installed to sda1. After each install I reboot to the live CD and look in sda1. I see EFI/Boot and EFI/Windows, but no EFI/Ubuntu and consequently no grubx64.efi. I understand the recommended procedure of moving grubx64.efi into the EFI/Boot directory to replace the existing bootx64.efi file, but I see no grubx64.efi and I don't know where it should be.
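    The fix described in both linked articles boils down to copying GRUB's EFI loader over the default loader path that the Insyde firmware insists on. A minimal sketch, assuming an EFI-mode install actually produced grubx64.efi (device names and paths are illustrative):

        sudo mount /dev/sda1 /mnt
        ls /mnt/EFI     # expect Boot/ and Windows/, plus ubuntu/ after a successful EFI-mode GRUB install
        sudo cp /mnt/EFI/Boot/bootx64.efi /mnt/EFI/Boot/bootx64-windows.efi   # keep the original loader
        sudo cp /mnt/EFI/ubuntu/grubx64.efi /mnt/EFI/Boot/bootx64.efi         # firmware will now load GRUB

    If EFI/ubuntu/grubx64.efi was never created, the installer most likely ran in legacy mode; re-running something like grub-install --target=x86_64-efi --efi-directory=/mnt from a live session booted in EFI mode is the usual way to produce it.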

    Read the article

  • How can I install Insight debugger?

    - by DandyWalker
    Hello. I am following along with a book in which the Insight debugger is required. I didn't find it on my Maverick system. I googled and found that it is no longer packaged in Debian, but I really need to install it. I compiled it from source and it installed, but it keeps telling me that Tk is missing whenever I start it. I installed Tk with sudo aptitude install tk, then tried to run it again: same result. I compiled it one more time and nothing really changed. So please, how can I install this?

    Update: this is the message I get:

        Tk_Init failed: Can't find a usable tk.tcl in the following directories:
            /usr/local/share/tk8.4 /usr/local/lib/tk8.4 /usr/lib/tk8.4
            /usr/local/library /usr/library /usr/tk8.4.1/library /tk8.4.1/library
        /usr/local/share/tk8.4/tk.tcl: no event type or button # or keysym
        no event type or button # or keysym
            while executing "bind Listbox <MouseWheel> { %W yview scroll [expr {- (%D / 120) * 4}] units }"
            (file "/usr/local/share/tk8.4/listbox.tcl" line 182)
            invoked from within "source /usr/local/share/tk8.4/listbox.tcl"
            (in namespace eval "::" script line 1)
            invoked from within "namespace eval :: [list source [file join $::tk_library $file.tcl]]"
            (procedure "SourceLibFile" line 2)
            invoked from within "SourceLibFile listbox"
            (in namespace eval "::tk" script line 4)
            invoked from within "namespace eval ::tk { SourceLibFile button SourceLibFile entry SourceLibFile listbox SourceLibFile menu SourceLibFile panedwindow SourceLibFile ..."
            invoked from within "if {$::tk_library ne ""} { if {[string equal $tcl_platform(platform) "macintosh"]} { proc ::tk::SourceLibFile {file} { if {[catch { namesp..."
            (file "/usr/local/share/tk8.4/tk.tcl" line 393)
            invoked from within "source /usr/local/share/tk8.4/tk.tcl"
            ("uplevel" body line 1)
            invoked from within "uplevel #0 [list source $file]"

    This probably means that tk wasn't installed properly.
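    The trace suggests a version mismatch: this Insight build is hard-wired to look for the Tk 8.4 script library, while sudo aptitude install tk installs a newer Tk. A sketch of the usual workaround, assuming the tk8.4 packages are still available on Maverick (verify the exact library paths with dpkg -L tk8.4):

        sudo apt-get install tcl8.4 tk8.4
        # point the embedded interpreter at the matching script libraries before starting insight
        export TCL_LIBRARY=/usr/share/tcltk/tcl8.4
        export TK_LIBRARY=/usr/share/tcltk/tk8.4
        insight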

    Read the article

  • Collaboration using github and testing the code

    - by wyred
    The procedure in my team is that we all commit our code to the same development branch. We have a test server that runs the latest code from this branch so that we can test our code on the servers. The problem is that if we want to merge the development branch into the master branch in order to publish new features to our production servers, some features that are not ready yet would be applied to the production servers. So we're considering having each developer work on a feature/topic branch, where each of them works on their own feature and, when it's ready, merges it into the development branch for testing and then into the master branch (a sketch of this flow in plain git follows below). However, because our test server only pulls changes from the development branch, the developers are unable to test their features there. While this is not a huge issue, since they can test on their local machines, the only problem I foresee is if we want to test callbacks from third-party services like SendGrid (where you specify a URL for SendGrid to update you on the status of emails sent out). How should we handle this? Note: we're not advanced git users. We use the GitHub app for Mac OS X and Windows to commit our work.
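    In plain git, the topic-branch flow being considered looks like this (branch and remote names are illustrative):

        git checkout -b feature/sendgrid-callbacks development   # each developer works here
        # ...commit as usual; when the feature is ready for the test server:
        git checkout development
        git merge --no-ff feature/sendgrid-callbacks
        git push origin development                               # the test server pulls this branch
        # once it has passed testing:
        git checkout master
        git merge --no-ff development
        git push origin master                                    # production gets only tested features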

    Read the article

  • How to read Scala code with lots of implicits?

    - by Petr Pudlák
    Consider the following code fragment (adapted from http://stackoverflow.com/a/12265946/1333025):

        // Using scalaz 6
        import scalaz._, Scalaz._

        object Example extends App {
          case class Container(i: Int)

          def compute(s: String): State[Container, Int] =
            state { case Container(i) => (Container(i + 1), s.toInt + i) }

          val d = List("1", "2", "3")
          type ContainerState[X] = State[Container, X]
          println( d.traverse[ContainerState, Int](compute) ! Container(0) )
        }

    I understand what it does at a high level. But I wanted to trace what exactly happens during the call to d.traverse at the end. Clearly, List doesn't have traverse, so it must be implicitly converted to another type that does. Even though I spent a considerable amount of time trying to find out, I wasn't very successful. First I found that there is a method in scalaz.Traversable:

        traverse[F[_], A, B](f: (A) => F[B], t: T[A])(implicit arg0: Applicative[F]): F[T[B]]

    but clearly this is not it (although it's most likely that "my" traverse is implemented using this one). After a lot of searching, I grepped the scalaz sources and found scalaz.MA's method:

        traverse[F[_], B](f: (A) => F[B])(implicit a: Applicative[F], t: Traverse[M]): F[M[B]]

    which seems to be very close. Still, I don't see what List is converted to in my example, or whether it uses MA.traverse or something else. The question is: what procedure should I follow to find out what exactly is called at d.traverse? That even such simple code is so hard to analyze seems to me like a big problem. Am I missing something very simple? How should I proceed when I want to understand code that uses a lot of imported implicits? Is there some way to ask the compiler what implicits it used? Or is there something like Hoogle for Scala, so that I can search for a method just by its name?
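    One way to ask the compiler directly is to print the tree after the typer phase, where every implicit conversion and implicit argument has been made explicit. A sketch using a standard scalac flag:

        # the output shows d wrapped in whichever implicit conversion the compiler
        # chose, with all implicit parameters filled in explicitly
        scalac -Xprint:typer Example.scala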

    Read the article

  • Part 3: Customization Strategy or how long does it take

    - by volker.eckardt(at)oracle.com
    The previous part in this blog should have made us aware that many procedures are required to manage all these steps. To review your status, let me ask you a question: what is your Customization Strategy? Your answer might be something like, 'Customization strategy? Well, we have standards and we let requirement documents get approved.'

    Let me ask you another question: how long does it take to redeploy all your customizations into a fresh installation? In 90% of all installations the answer to this question would be: we can't! Although no one would (hopefully) have to do it, just thinking about it makes us recognize that today we have too many manual steps involved, different procedures, and sometimes (undocumented) manual steps to complete a customization installation. And, in general, too many customizations.

    Why is working with customizations often so complicated and time consuming? Here are the key reasons as I have identified them in my projects:

    - Customization standards defined, but not maintained
    - Different knowledge on the developer side (results get an individual developer touch)
    - No need to automate deployment (not forced by the client)
    - Different documentation styles, not easy to hand over to someone else
    - Different development concepts, difficult for maintenance
    - Just the minimum present for testing, often positive testing only
    - Deviations from naming conventions accepted, although defined
    - Complicated procedures, therefore sometimes partially ignored
    - And last but not least, hand-made version control (still)

    If you had to 'redeploy all your customizations', you would have to:

    - Follow all your own standards and best practices
    - Track deviations and define corrective tasks
    - Automate as much as possible, minimize manual tasks
    - Not allow any change to come in without version control
    - Utilize products to support you in deployment
    - Minimize hand-made scripts and extensive documentation
    - Review regularly used techniques to guarantee that all are in line with the current release and also easily maintainable
    - Create solution libraries and force the team to contribute and reuse
    - Define quality activities and execute them
    - Define a procedure to release customizations

    I know, it is easy to write down, but much harder to manage. I will provide some guidelines in my next blog.

    Volker

    Read the article

  • What is the value in hiding the details through abstractions? Isn't there value in transparency?

    - by user606723
    Background

    I am not a big fan of abstraction. I will admit that one can benefit from the adaptability, portability and re-usability of interfaces, etc. There is real benefit there, and I don't wish to question that, so let's ignore it. There is the other major "benefit" of abstraction, which is to hide implementation logic and details from users of the abstraction. The argument is that you don't need to know the details, and that one should concentrate on one's own logic at that point. It makes sense in theory. However, whenever I've been maintaining large enterprise applications, I have always needed to know more details. It becomes a huge hassle digging deeper and deeper into the abstraction at every turn just to find out exactly what something does; i.e. having to do "open declaration" about 12 times before finding the stored procedure used. This 'hide the details' mentality seems to just get in the way. I'm always wishing for more transparent interfaces and less abstraction. I can read high-level source code and know what it does, but I'll never know how it does it, when how it does it is what I really need to know. What's going on here? Has every system I've ever worked on just been badly designed (from this perspective, at least)?

    My philosophy

    When I develop software, I feel like I try to follow a philosophy closely related to the Arch Linux philosophy: "Arch Linux retains the inherent complexities of a GNU/Linux system, while keeping them well organized and transparent. Arch Linux developers and users believe that trying to hide the complexities of a system actually results in an even more complex system, and is therefore to be avoided." And therefore, I never try to hide the complexity of my software behind abstraction layers. I try to abuse abstraction, not become a slave to it.

    Question at heart

    Is there real value in hiding the details? Aren't we sacrificing transparency? Isn't this transparency valuable?

    Read the article

  • I want to sell my software [C# desktop application] but I'm stuck on licensing [closed]

    - by Surendra Soni
    Possible Duplicate: if I use .NET Framework for my application, do I have to pay anything to Microsoft?

    I have developed a desktop application called "Institute Management System", which has modules like Class manager, Subject manager, Topics manager, Student inquiry manager, Student admission manager, Fees manager, Exam manager, etc., using C# for the front end and MS Access for the back end. The main problem is that I want to sell it and earn some money, but I heard that my application needs to be registered with Microsoft, and that I would have to get a license from them for selling and pay them money too. I have spent four months developing it at my own expense, and worked very hard on it. So I want some tips, advice, or suggestions. Please also tell me the procedure for all of the required registrations and payment issues. Can you also suggest any other technology where I could develop my application and sell it without worrying about licensing and related issues? I am now also confused about whether MS technology is open source or not.

    Read the article

  • Designing a Database Application with OOP

    - by Tim C
    I often develop SQL database applications using LINQ, and my methodology is to build model classes to represent each table; each table that needs inserting or updating gets a Save() method (which does either an InsertOnSubmit() or SubmitChanges(), depending on the state of the object). Often, when I need to represent a collection of records, I'll create a class that inherits from a List-like object of the atomic class, e.g.:

        public class CustomerCollection : CoreCollection<Customer> { }

    Recently, I was working on an application where end-users were experiencing slowness; each of the objects needed to be saved to the database if it met a certain criterion. My Save() method was slow, presumably because I was making all kinds of round-trips to the server and calling DataContext.SubmitChanges() after each atomic save. So the code might have looked something like this:

        foreach(Customer c in customerCollection)
        {
            if(c.ShouldSave())
            {
                c.Save();
            }
        }

    I worked through multiple strategies to optimize, but ultimately settled on passing a big string of data to a SQL stored procedure, where the string has all the data that represents the records I was working with. It might look something like this:

        CustomerID:34567;CurrentAddress:23 3rd St;CustomerID:23456;CurrentAddress:123 4th St

    So SQL Server parses the string, performs the logic to determine the appropriateness of the save, and then inserts, updates, or ignores. With C#/LINQ doing this work, I saw 5-10 records/s. When SQL does it, I get 100 records/s, so there is no denying the stored proc is more efficient; however, I hate the solution because it doesn't seem nearly as clean or safe. My real concern is that I don't have any better solutions that hold a candle to the performance of the stored proc solution. Am I doing something obviously wrong in how I'm thinking about designing database applications? Are there better ways of designing database applications?

    Read the article

  • RUEI 12.1.0.3.0 dependency requirement for php-soap-5.1.6

    - by sthieme
    Dear Readers,

    Please be aware of the new php-soap-5.1.6 dependency in RUEI 12.1.0.3. For a swift upgrade to RUEI 12.1.0.3 you should be aware of this prerequisite, as it can be a time-eater to obtain individual rpm packages for an old OS revision inside a datacenter once you have started the upgrade process. You may use the following procedure to retrieve the required package via http://public-yum.oracle.com (see the sketch below).

    Check /etc/issue or /etc/issue.net (or /etc/redhat-release for RHEL-based OS) for your current release in order to obtain the fitting package version. Customers on OEL can download the packages from our public-yum.oracle.com server: http://public-yum.oracle.com/repo/, e.g. http://public-yum.oracle.com/repo/OracleLinux/OL5/8/base/x86_64/php-soap-5.1.6-32.el5.x86_64.rpm. Earlier releases (up to 5.5) are located under the EnterpriseLinux path instead of the OracleLinux path, e.g. http://public-yum.oracle.com/repo/EnterpriseLinux/EL5/5/base/x86_64/php-soap-5.1.6-27.el5.x86_64.rpm.

    Note: you will have to obtain the relevant RedHat rpm packages via the login-protected RHN URLs. Oracle can only provide support for Oracle Enterprise Linux, and RHEL packages are not available publicly via rpm-seek.com to my knowledge.

    Kind regards,
    Stefan
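    A sketch of fetching and installing the package ahead of the upgrade, using the Oracle Linux 5.8 URL quoted above (pick the path that matches your own release):

        wget http://public-yum.oracle.com/repo/OracleLinux/OL5/8/base/x86_64/php-soap-5.1.6-32.el5.x86_64.rpm
        rpm -ivh php-soap-5.1.6-32.el5.x86_64.rpm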

    Read the article

  • C# - How to store and reuse queries

    - by Jason Holland
    I'm learning C# by programming a real monstrosity of an application for personal use. Part of my application uses several SPARQL queries, like so:

        const string ArtistByRdfsLabel = @"
        PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT DISTINCT ?artist
        WHERE {{
            {{ ?artist rdf:type <http://dbpedia.org/ontology/MusicalArtist> .
               ?artist rdfs:label ?rdfsLabel . }}
            UNION
            {{ ?artist rdf:type <http://dbpedia.org/ontology/Band> .
               ?artist rdfs:label ?rdfsLabel . }}
            FILTER ( str(?rdfsLabel) = '{0}' )
        }}";

        string Query = String.Format(ArtistByRdfsLabel, Artist);

    I don't like the idea of keeping all these queries in the same class I'm using them in, so I thought I would move them into their own dedicated class to remove clutter from my RestClient class. I'm used to working with SQL Server and just wrapping every query in a stored procedure, but since this is not SQL Server, I'm scratching my head over what would be best for these SPARQL queries. Are there any better approaches to storing these queries, using any special C# language features (or general, non-C#-specific approaches) that I may not already know about?

    Read the article

  • How to get Broadcom BCM 43XX Wireless card working

    - by Fer1805
    NOTE - Although this question is for a specific version of Ubuntu and a specific Broadcom model, the answer applies to most Broadcom models and to Ubuntu 11.04, 11.10, 12.04 and 12.10. If you have come here from another question, feel free to read the answer instead. The answer covers many problems with several solutions.

    I'm having serious problems installing the Broadcom drivers for Ubuntu 11.04. It worked perfectly on my previous version, but now it seems impossible. I'm a user with no advanced knowledge of Linux, so I need clear explanations about make, compile, etc. I was following the instructions in the following post, with no luck: How can I get Broadcom BCM4311 Wireless working? Can someone help me?

    Edit: for the command lspci | grep Network, I get the following message:

        06:00.0 Network controller: Broadcom Corporation BCM4311 802.11b/g WLAN (rev 01)

    For the command iwconfig, I get the following:

        lo      no wireless extensions.
        eth0    no wireless extensions.

    When I follow these steps (from the link above), there are NO error messages at all:

    - Open the Synaptic Package Manager and search for bcm.
    - Uninstall the bcm-kernel-source package.
    - Make sure that the firmware-b43-installer and b43-fwcutter packages are installed.
    - Type into a terminal: cat /etc/modprobe.d/* | egrep '8180|acx|at76|ath|b43|bcm|CX|eth|ipw|irmware|isl|lbtf|orinoco|ndiswrapper|NPE|p54|prism|rtl|rt2|rt3|rt6|rt7|witch|wl' (you may want to copy this) and see if the term blacklist bcm43xx is there.
    - If it is, type cd /etc/modprobe.d/ and then sudo gedit blacklist.conf.
    - Put a # in front of the line blacklist bcm43xx, then save the file (I was getting error messages in the terminal about not being able to save, but it actually did save properly).
    - Reboot.

    End of procedure. (The same steps, condensed into shell commands, are sketched below.)

    Before (not Ubuntu 11.04), if I wanted to connect to wireless, I just went to the icon at the upper side of the screen, clicked, it showed ALL the wireless networks available, and done. Now the only options I see are: Wired Network, Auto Eth0, Disconnect, VPN, Enable networking, Connection information, Edit connections. I hope the above info is enough for you to help.
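    Condensed into shell commands, the same procedure looks roughly like this (package names are the standard b43 ones from the Ubuntu repositories; run this over a wired connection):

        sudo apt-get remove bcmwl-kernel-source             # the "bcm" kernel source package mentioned above
        sudo apt-get install firmware-b43-installer b43-fwcutter
        cat /etc/modprobe.d/* | egrep 'blacklist bcm43xx'   # any hit must be commented out
        sudo sed -i 's/^blacklist bcm43xx/#blacklist bcm43xx/' /etc/modprobe.d/blacklist.conf
        sudo reboot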

    Read the article

  • Entity Object Based on PL/SQL

    - by Manoj Madhusoodanan
    This blog describes how to create a PL/SQL-based Entity Object. Oracle Applications has a number of APIs, and each API performs numerous tasks. We can create a PL/SQL-based EO which directly invokes a PL/SQL stored procedure from the EO. Here I am demonstrating this using the standard API FND_USER_PKG.CREATEUSER. This API has x_user_name and x_owner as mandatory parameters. My task is to create a user through an OAF page that accepts a User Name and Password. The following steps need to be performed to achieve the above scenario.

    1) Create FndUserEO as follows: include all the API parameters and WHO columns in the EO. Make the UserName and EncryptedUserPassword columns mandatory. (Here I am not using an encrypted password; the column name is the same as the table column, so I am keeping it.) Generate the VO.
    2) Edit FndUserEOImpl and add the following.
    3) Attach FndUserVO to the AM.
    4) Create the UI.
    5) Deploy the following files to the middle tier and restart the server.

    Entity Object:
    xxcust.oracle.apps.fnd.user.schema.server.FndUserEO.xml
    xxcust.oracle.apps.fnd.user.schema.server.FndUserEOImpl.java
    View Object:
    xxcust.oracle.apps.fnd.user.server.FndUserVO.xml
    xxcust.oracle.apps.fnd.user.server.FndUserVOImpl.java
    User Interface:
    xxcust.oracle.apps.fnd.user.webui.CreateFndUserCO.java
    xxcust.oracle.apps.fnd.user.webui.CreateFndUserPG.xml

    You can test by giving a User Name and Password.

    Read the article

  • DIY Carbonator Creates Pop Rocks Like Fizzy Fruit [Science]

    - by Jason Fitzpatrick
    If you've ever sat around wishing that scientists would stop wasting time trying to solve pressing global problems and instead genetically engineer a bizarre but delicious hybrid of Pop Rocks candy and wholesome fruit, this mad scientist experiment is for you. Over at Evil Mad Scientist Laboratories they share a really fun weekend project. Contributor Rich Faulhaber was looking for a way to make eating fruit extra fun and science-infused for his kids. His solution? Build a homemade carbon dioxide injector that infuses fruit with carbonation. Having trouble imagining that? Envision a bowl of strawberries where every strawberry bursts into a crazy flurry of strawberry flavor and champagne bubbles every time you bite into it. Fizzy fruit! Hit up the link below to see how he took pretty common parts (a CO2 tank from a paintball gun, a water filter canister from the hardware store, and other cheap and readily available bits, with the exception of the gas regulator, which he suggests you shop garage sales and surplus stores to find a deal on) and combined them to create a CO2 fruit infuser, and to read more about his setup and the procedure he uses to infuse fruit with carbonation.

    The C02inator [Evil Mad Scientist Laboratories via Hack a Day]

    Read the article

  • SQL Server Interview Questions

    - by Rodney Vinyard
    User-Defined Functions

    Scalar User-Defined Function
    A scalar user-defined function returns one of the scalar data types. Text, ntext, image and timestamp data types are not supported. These are the type of user-defined functions that most developers are used to in other programming languages.

    Inline Table-Value User-Defined Function
    An inline table-value user-defined function returns a table data type and is an exceptional alternative to a view, as the user-defined function can pass parameters into a T-SQL SELECT command and, in essence, provide us with a parameterized, non-updateable view of the underlying tables.

    Multi-statement Table-Value User-Defined Function
    A multi-statement table-value user-defined function returns a table and is also an exceptional alternative to a view, as the function can support multiple T-SQL statements to build the final result, where the view is limited to a single SELECT statement. Also, the ability to pass parameters into a T-SQL SELECT command (or a group of them) gives us the capability, in essence, to create a parameterized, non-updateable view of the data in the underlying tables. Within the CREATE FUNCTION command you must define the table structure that is being returned. After creating this type of user-defined function, I can use it in the FROM clause of a T-SQL command, unlike the behavior found when using a stored procedure, which can also return record sets.

    Read the article

  • Developing a SQL Server Function in a Test-Harness.

    - by Phil Factor
    /* Many times, it is a lot quicker to take some pain up-front and make a proper development/test harness for a routine (function or procedure) rather than think 'I'm feeling lucky today!'. Then, you keep code and harness together from then on. Every time you run the build script, it runs the test harness too. The advantage is that, if the test harness persists, then it is much less likely that someone, probably 'you-in-the-future', unintentionally breaks the code. If you store the actual code for the procedure as well as the test harness, then it is likely that any bugs in functionality will break the build rather than introduce subtle bugs later on that could even slip through testing and get into production.

    This is just an example of what I mean. Imagine we had a database that was storing addresses with embedded UK postcodes. We really wouldn't want that. Instead, we might want the postcode in one column and the address in another. In effect, we'd want to extract the entire postcode string and place it in another column. This might be part of a table refactoring, or it could easily be part of a process of importing addresses from another system. We could easily decide to do this with a function that takes in a table as its parameter and produces a table as its output. This is all very well, but we'd need to work on it, and test it whenever we make an alteration. By its very nature, a routine like this either works very well or horribly, but there is every chance that you might introduce subtle errors by fiddling with it, and if young Thomas, the rather cocky developer who has just joined, touches it, it is bound to break.

    Right, we drop the function we're developing and re-create it. This is so we avoid the problem of having to change CREATE to ALTER when working on it. */
    IF EXISTS(SELECT * FROM sys.objects WHERE name LIKE 'ExtractPostcode'
                                          and schema_name(schema_ID)='Dbo')
        DROP FUNCTION dbo.ExtractPostcode
    GO

    /* we drop the user-defined table type and recreate it */
    IF EXISTS(SELECT * FROM sys.types WHERE name LIKE 'AddressesWithPostCodes'
                                        and schema_name(schema_ID)='Dbo')
      DROP TYPE dbo.AddressesWithPostCodes
    GO
    /* we drop the user-defined table type and recreate it */
    IF EXISTS(SELECT * FROM sys.types WHERE name LIKE 'OutputFormat'
                                        and schema_name(schema_ID)='Dbo')
      DROP TYPE dbo.OutputFormat
    GO

    /* and now create the table type that we can use to pass the addresses to the function */
    CREATE TYPE AddressesWithPostCodes AS TABLE
    (
      AddressWithPostcode_ID INT IDENTITY PRIMARY KEY, --because they work better that way!
      Address_ID INT NOT NULL, --the address we are fixing
      TheAddress VARCHAR(100) NOT NULL --The actual address
    )
    GO
    CREATE TYPE OutputFormat AS TABLE
    (
      Address_ID INT PRIMARY KEY, --the address we are fixing
      TheAddress VARCHAR(1000) NULL, --The actual address
      ThePostCode VARCHAR(105) NOT NULL --The Postcode
    )
    GO
    CREATE FUNCTION ExtractPostcode(@AddressesWithPostCodes AddressesWithPostCodes READONLY)
    /**
    summary: >
      This table-valued function takes a table type as a parameter, containing a table
      of addresses along with their integer IDs. Each address has an embedded postcode
      somewhere in it, but not consistently in a particular place. The routine takes out
      the postcode and puts it in its own column, passing back a table where the integer
      key is accompanied by the address without the (first) postcode, and the postcode.
      If no postcode, then the address is returned unchanged and the postcode will be a
      blank string.
    Author: Phil Factor
    Revision: 1.3
    date: 20 May 2014
    example:
      - code:
    returns: >
      Table of Address_ID, TheAddress and ThePostCode.
    **/
    RETURNS @FixedAddresses TABLE
    (
      Address_ID INT, --the address we are fixing
      TheAddress VARCHAR(1000) NULL, --The actual address
      ThePostCode VARCHAR(105) NOT NULL --The Postcode
    )
    AS --body of the function
    BEGIN
    DECLARE @BlankRange VARCHAR(10)
    SELECT @BlankRange = CHAR(0)+'- '+CHAR(160)
    INSERT INTO @FixedAddresses(Address_ID, TheAddress, ThePostCode)
    SELECT Address_ID,
           CASE WHEN start>0 THEN REPLACE(STUFF([Theaddress],start,matchlength,''),'  ',' ')
                ELSE TheAddress END AS TheAddress,
           CASE WHEN Start>0 THEN SUBSTRING([Theaddress],start,matchlength-1) ELSE '' END AS ThePostCode
    FROM (--we have a derived table with the results we need for the chopping
      SELECT MAX(PATINDEX([matched],' '+[Theaddress] collate SQL_Latin1_General_CP850_Bin)) AS start,
             MAX(CASE WHEN PATINDEX([matched],' '+[Theaddress] collate SQL_Latin1_General_CP850_Bin)>0
                      THEN TheLength ELSE 0 END) AS matchlength,
             MAX(TheAddress) AS TheAddress,
             Address_ID
      FROM (SELECT --first the match, then the length. There are three possible valid matches
              '%['+@BlankRange+'][A-Z][0-9] [0-9][A-Z][A-Z]%', 7 --seven-character postcode
            UNION ALL SELECT '%['+@BlankRange+'][A-Z][A-Z0-9][A-Z0-9] [0-9][A-Z][A-Z]%', 8
            UNION ALL SELECT '%['+@BlankRange+'][A-Z][A-Z][A-Z0-9][A-Z0-9] [0-9][A-Z][A-Z]%', 9)
           AS f(Matched,TheLength)
      CROSS JOIN @AddressesWithPostCodes
      GROUP BY [address_ID]
    ) WORK;
    RETURN
    END
    GO
    ---------------------------end of the function---------------------------

    IF NOT EXISTS (SELECT * FROM sys.objects WHERE name LIKE 'ExtractPostcode')
      BEGIN
      RAISERROR ('There was an error creating the function.',16,1)
      RETURN
      END

    /* now the job is only half done, because we need to make sure that it works. So we now load our sample data, making sure that for each sample we have what we actually think the output should be. */
    DECLARE @InputTable AddressesWithPostCodes
    INSERT INTO @InputTable(Address_ID,TheAddress)
    VALUES(1,'14 Mason mews, Awkward Hill, Bibury, Cirencester, GL7 5NH'),
    (2,'5 Binney St      Abbey Ward    Buckinghamshire      HP11 2AX UK'),
    (3,'BH6 3BE 8 Moor street, East Southbourne and Tuckton W     Bournemouth UK'),
    (4,'505 Exeter Rd,   DN36 5RP Hawerby cum BeesbyLincolnshire UK'),
    (5,''),
    (6,'9472 Lind St,    Desborough    Northamptonshire NN14 2GH  NN14 3GH UK'),
    (7,'7457 Cowl St, #70      Bargate Ward  Southampton   SO14 3TY UK'),
    (8,'''The Pippins'', 20 Gloucester Pl, Chirton Ward,   Tyne & Wear   NE29 7AD UK'),
    (9,'929 Augustine lane,    Staple Hill Ward     South Gloucestershire      BS16 4LL UK'),
    (10,'45 Bradfield road, Parwich   Derbyshire    DE6 1QN UK'),
    (11,'63A Northampton St,   Wilmington    Kent   DA2 7PP UK'),
    (12,'5 Hygeia avenue,      Loundsley Green WardDerbyshire    S40 4LY UK'),
    (13,'2150 Morley St,Dee Ward      Dumfries and Galloway      DG8 7DE UK'),
    (14,'24 Bolton St,   Broxburn, Uphall and Winchburg    West Lothian  EH52 5TL UK'),
    (15,'4 Forrest St,   Weston-Super-Mare    North Somerset       BS23 3HG UK'),
    (16,'89 Noon St,     Carbrooke     Norfolk       IP25 6JQ UK'),
    (17,'99 Guthrie St,  New Milton    Hampshire     BH25 5DF UK'),
    (18,'7 Richmond St,  Parkham       Devon  EX39 5DJ UK'),
    (19,'9165 laburnum St,     Darnall Ward  Yorkshire, South     S4 7WN UK')

    DECLARE @OutputTable OutputFormat --the table of what we think the correct results should be
    DECLARE @IncorrectRows OutputFormat --done for error reporting

    --here is the table of what we think the output should be, along with a few edge cases.
    INSERT INTO @OutputTable(Address_ID,TheAddress, ThePostcode)
    VALUES
    (1, '14 Mason mews, Awkward Hill, Bibury, Cirencester, ','GL7 5NH'),
    (2, '5 Binney St   Abbey Ward    Buckinghamshire      UK','HP11 2AX'),
    (3, '8 Moor street, East Southbourne and Tuckton W    Bournemouth UK','BH6 3BE'),
    (4, '505 Exeter Rd,Hawerby cum Beesby   Lincolnshire UK','DN36 5RP'),
    (5, '',''),
    (6, '9472 Lind St,Desborough    Northamptonshire NN14 3GH UK','NN14 2GH'),
    (7, '7457 Cowl St, #70    Bargate Ward  Southampton   UK','SO14 3TY'),
    (8, '''The Pippins'', 20 Gloucester Pl, Chirton Ward,Tyne & Wear   UK','NE29 7AD'),
    (9, '929 Augustine lane,  Staple Hill Ward     South Gloucestershire      UK','BS16 4LL'),
    (10, '45 Bradfield road, ParwichDerbyshire    UK','DE6 1QN'),
    (11, '63A Northampton St,Wilmington    Kent   UK','DA2 7PP'),
    (12, '5 Hygeia avenue,    Loundsley Green WardDerbyshire    UK','S40 4LY'),
    (13, '2150 Morley St,     Dee Ward      Dumfries and Galloway      UK','DG8 7DE'),
    (14, '24 Bolton St,Broxburn, Uphall and Winchburg    West Lothian  UK','EH52 5TL'),
    (15, '4 Forrest St,Weston-Super-Mare    North Somerset       UK','BS23 3HG'),
    (16, '89 Noon St,  Carbrooke     Norfolk       UK','IP25 6JQ'),
    (17, '99 Guthrie St,      New Milton    Hampshire     UK','BH25 5DF'),
    (18, '7 Richmond St,      Parkham       Devon  UK','EX39 5DJ'),
    (19, '9165 laburnum St,   Darnall Ward  Yorkshire, South     UK','S4 7WN')

    INSERT INTO @IncorrectRows(Address_ID,TheAddress, ThePostcode)
      SELECT Address_ID,TheAddress,ThePostCode FROM dbo.ExtractPostcode(@InputTable)
      EXCEPT
      SELECT Address_ID,TheAddress,ThePostCode FROM @OutputTable;
    IF @@RowCount>0
      BEGIN
      PRINT 'The following rows gave ';
      SELECT Address_ID,TheAddress,ThePostCode FROM @IncorrectRows
      RAISERROR ('These rows gave unexpected results.',16,1);
      END

    /* For tear-down, we drop the user-defined table type */
    IF EXISTS(SELECT * FROM sys.types WHERE name LIKE 'OutputFormat'
                                        and schema_name(schema_ID)='Dbo')
      DROP TYPE dbo.OutputFormat
    GO
    /* once this is working, the development work turns from a chore into a delight, and one ends up hitting execute so much more often to catch mistakes as soon as possible. It also prevents a wildly-broken routine getting into a build! */

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Receive Location

    - by Stuart Brierley
    I have previously talked about the installation of the Community ODBC adapter and also about using the ODBC adapter to generate schemas. But what about creating a receive location? An ODBC receive location will periodically poll the configured database using the stored procedure or SQL string defined in your request schema. If you need to, begin by adding a new receive port to your BizTalk configuration. Create a new receive location, select the ODBC adapter, and click Address. You will now be shown the ODBC Community Adapter Transport properties window. Select the connection string and you will be shown the Choose Data Source window. If you have already created the Test Database source when generating a schema from ODBC, this will be shown (if not, go and take a look at my previous post to see how this is done). You will then need to choose the SQL command that will be run by the receive port. In this case I have deployed the Test Mapping schemas that I created previously and selected the Request schema. You should now have populated the appropriate properties for the ODBC Community Adapter. Finally, set the standard receive location properties and your ODBC receive location is ready.

    Read the article

  • We Need More Migration!

    - by rickramsey
    Eva Mendez says, "Hey chico, do you really want to keep your data in that tired legacy file system when it could be enjoying encryption, compression, deduplication, snapshots, remote replication and other benefits provided by ZFS in Oracle Solaris 11? It's really not that hard to cross over. If you know how."

    "You're telling me 'I don't know how'? That's fine, sweetheart. Go to OTN. Take my word for it. They know how."

    <blushing> Aw shucks, Eva. Anything for you! </blushing>

    The Best Way to Migrate Data From Legacy File Systems to ZFS
    To migrate data from a legacy file system to ZFS in Oracle Solaris 11, you need to install the shadow-migration package and enable the shadowd service, then follow the simple procedure described by Dominic Kay (sketched below).

    How to Update to Oracle Solaris 11 Using the Image Packaging System
    Oracle Solaris 11.1 has been released. You can upgrade using either Oracle's official Solaris release repository or, if you have a support contract, the Support repository. Peter Dennis explains how.

    How to Migrate Oracle Database from Oracle Solaris 8 to Oracle Solaris 11
    How to use the Oracle Solaris 8 P2V (physical-to-virtual) Archiver tool, which comes with Oracle Solaris Legacy Containers, to migrate a physical Oracle Solaris 8 system with Oracle Database and an Oracle Automatic Storage Management file system into an Oracle Solaris 8 branded zone inside an Oracle Solaris 10 guest domain on top of an Oracle Solaris 11 control domain.

    - Ricardo
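    For the first article, the whole shadow-migration procedure reduces to a few commands; a sketch, with dataset and path names invented for illustration:

        pkg install shadow-migration
        svcadm enable shadowd
        # create the new ZFS dataset with the legacy file system as its shadow source
        zfs create -o shadow=file:///export/legacy-data rpool/export/new-data
        shadowstat    # watch the migration drain the old file system in the background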

    Read the article

  • What are some of the benefits of a "Micro-ORM"?

    - by Wayne M
    I've been looking into the so-called "micro ORMs" like Dapper and (to a lesser extent, as it relies on .NET 4.0) Massive, as these might be easier to introduce at work than a full-blown ORM, since our current system is highly reliant on stored procedures and would require significant refactoring to work with an ORM like NHibernate or EF. What is the benefit of using one of these over a full-featured ORM? It seems like just a thin layer around a database connection that still forces you to write raw SQL. Perhaps I'm wrong, but I was always told the reason for ORMs in the first place is so you don't have to write SQL; it can be automatically generated, especially for multi-table joins and mapping relationships between tables, which are a pain to do in pure SQL but trivial with an ORM. For instance, looking at an example of Dapper:

        var connection = new SqlConnection(); // setup here...
        var person = connection.Query<Person>(
            "select * from people where PersonId = @personId",
            new { PersonId = 42 });

    How is that any different from using a hand-rolled ADO.NET data layer, except that you don't have to write the command, set the parameters and, I suppose, map the entity back using a builder? It looks like you could even use a stored procedure call as the SQL string. Are there other tangible benefits that I'm missing here, where a micro ORM makes sense to use? I'm not really seeing how it's saving anything over the "old" way of using ADO.NET, except maybe a few lines of code; you still have to figure out what SQL you need to execute (which can get hairy) and you still have to map relationships between tables (the part that, IMHO, ORMs help the most with).

    Read the article

  • SSDT gotcha – Moving a file erases code analysis suppressions

    - by jamiet
    I discovered a little wrinkle in SSDT today that is worth knowing about if you are managing your database schemas using SSDT. In short, if a file is moved to a different folder in the project then any code analysis suppressions that reference that file will disappear from the suppression file. This makes sense if you think about it because the paths stored in the suppression file are no longer valid, but you probably won’t be aware of it until it happens to you. If you don’t know what code analysis is or you don’t know what the suppression file is then you can probably stop reading now, otherwise read on for a simple short demo. Let’s create a new project and add a stored procedure to it called sp_dummy. Naming stored procedures with a sp_ prefix is generally frowned upon and hence SSDT static code analysis will look for occurrences of this and flag them. So, the next thing we need to do is turn on static code analysis in the project properties: A subsequent build causes a code analysis warning as we might expect: Let’s suppose we actually don’t mind stored procedures with sp_ prefixes, we can just right-click on the message to suppress and get rid of it: That causes a suppression file to get created in our project: Notice that the suppression file contains a relative path to the file that has had the suppression placed upon it. Now if we simply move the file within our project to a new folder notice that the suppression that we just created gets removed from the suppression file: As I alluded above this behaviour is intuitive because the path originally stored in the suppression file is no longer relevant but you’re probably not going to be aware of it until it happens to you and messages that you thought you had suppressed start appearing again. Definitely one to be aware of. @Jamiet   

    Read the article

  • How do I get a Broadcom BCM4311 working?

    - by Fer1805
    I'm having serious problems installing the Broadcom drivers for Ubuntu 11.04. It worked perfectly on my previous version, but now it seems impossible. I'm a user with no advanced knowledge of Linux, so I need clear explanations about make, compile, etc. I was following the instructions in the following post, with no luck: Broadcom BCM4311 Wireless not working. Can someone help me?

    Edit: for the command lspci | grep Network, I get the following message:

        06:00.0 Network controller: Broadcom Corporation BCM4311 802.11b/g WLAN (rev 01)

    For the command iwconfig, I get the following:

        lo      no wireless extensions.
        eth0    no wireless extensions.

    When I follow these steps (from the link above), there are NO error messages at all:

    - Open the Synaptic Package Manager and search for bcm.
    - Uninstall the bcm-kernel-source package.
    - Make sure that the firmware-b43-installer and b43-fwcutter packages are installed.
    - Type into a terminal: cat /etc/modprobe.d/* | egrep '8180|acx|at76|ath|b43|bcm|CX|eth|ipw|irmware|isl|lbtf|orinoco|ndiswrapper|NPE|p54|prism|rtl|rt2|rt3|rt6|rt7|witch|wl' (you may want to copy this) and see if the term blacklist bcm43xx is there.
    - If it is, type cd /etc/modprobe.d/ and then sudo gedit blacklist.conf.
    - Put a # in front of the line blacklist bcm43xx, then save the file (I was getting error messages in the terminal about not being able to save, but it actually did save properly).
    - Reboot.

    End of procedure.

    Before (not Ubuntu 11.04), if I wanted to connect to wireless, I just went to the icon at the upper side of the screen, clicked, it showed ALL the wireless networks available, and done. Now the only options I see are: Wired Network, Auto Eth0, Disconnect, VPN, Enable networking, Connection information, Edit connections. I hope the above info is enough for you to help.

    Read the article

  • Using an Apt Repository for Paid Software Updates

    - by Scott Warren
    I'm trying to determine a way to distribute software updates for a hosted/on-site web application that may have weekly and/or monthly updates. I don't want the customers who use the on-site product to have to worry about updating it manually; I just want it to download and install automatically, a la Google Chrome. I'm planning on providing an OVF file with Ubuntu and the software installed and configured.

    My first thought on how to distribute the software is to create six apt repositories/channels (not sure which would be better at this point) that will be accessed over SSH using keys, so that if a customer doesn't renew their subscription we can disable their account (see the sketch after this list):

    - Beta - used internally on test data to check the package for major defects.
    - Internal - used internally on live data to check the package for defects (dog-fooding stage).
    - External 1 - deployed to 1% of our user base (randomly selected) to check for defects.
    - External 9 - deployed to 9% of our user base (randomly selected) to check for defects.
    - External 90 - deployed to the remaining 90% of users.
    - Hosted - deployed to the hosted environment.

    It will take a sign-off at each stage to move into the next repository, in case problems are reported. My questions to the community are: has anyone tried something like this before? Can anyone see a downside to this type of procedure? Is there a better way?
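    APT has long shipped an ssh/rsh fetch method alongside http and ftp (check for /usr/lib/apt/methods/ssh on your release), so a key-gated channel can be expressed as an ordinary sources entry; a sketch, with a hypothetical account, host and path:

        # on the customer appliance: subscribe it to its assigned channel over ssh (key-based auth)
        echo "deb ssh://updates@repo.example.com/srv/apt external90 main" | sudo tee /etc/apt/sources.list.d/vendor.list
        sudo apt-get update && sudo apt-get -y upgrade    # e.g. from a nightly cron job

    Disabling a lapsed subscription is then just a matter of removing that customer's public key on the repository server.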

    Read the article

  • Java stored procedures in Oracle, a good idea?

    - by Scott A
    I'm considering using a Java stored procedure as a very small shim to allow UDP communication from a PL/SQL package. Oracle does not provide a UTL_UDP to match its UTL_TCP. There is a third-party XUTL_UDP that uses Java, but it's closed source (meaning I can't see how it's implemented, not that I don't want to use closed source).

    An important distinction between PL/SQL and Java stored procedures with regard to networking: PL/SQL sockets are closed when dbms_session.reset_package is called, but Java sockets are not. So if you want to keep a socket open to avoid the tear-down/reconnect costs, you can't do it in sessions that are using reset_package (like mod_plsql or mod_owa HTTP requests).

    I haven't used Java stored procedures in a production capacity in Oracle before. This is a very large, heavily-used database, and this particular shim would be heavily used as well (it serves as a UDP bridge between a PL/SQL RFC 5424 syslog client and the local rsyslog daemon). Am I opening myself up for woe and horror, or are Java stored procedures stable and robust enough for use in 10g? I'm wondering about issues with the embedded JVM, the JIT, garbage collection, or other things that might impact a heavily used database.
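    For what it's worth, the deployment side of such a shim is small; a sketch of loading it (class name, schema and grant are illustrative, not the XUTL_UDP implementation):

        # load the Java source into the schema and compile it inside the database
        loadjava -user scott/tiger -resolve UDPShim.java
        # the class needs a socket permission granted once by a privileged user, e.g. from SQL*Plus:
        #   exec dbms_java.grant_permission('SCOTT','SYS:java.net.SocketPermission','localhost:514','connect,resolve')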

    Read the article

  • ecryptfs - decrypt and mount at boot with USB key

    - by Josh McGee
    I have a system running Ubuntu Server as a testbed for some services that I want to get familiar with. I decided to let the installation procedure set up encryption. I knew all along that I would have to decrypt it with the passphrase in order to get the system booted, but I assumed it wouldn't matter, since it will only boot once or twice a month. However, my brother has informed me that he is a victim of power outages at the residence where this server is located. This means we have to explain to his girlfriend how to turn on the computer, attach a keyboard, connect a monitor (she just can't understand that she can type to the computer without a display, so whatever) and input the passphrase for us while we are at work. I have arrived at the conclusion that I should just put together a USB key that can be plugged in before powering on the computer, to avoid all the trouble. Is this possible with ecryptfs? Is there a tutorial or a simple list of instructions available, so that I can knock this out and focus back on the stuff I care about?

    Edit: I am aware that this is possible with LUKS and dm-crypt (a sketch of that route is below), but unfortunately the magical encryption that Ubuntu hands you during the installation is only ecryptfs, so my question is specific to that.
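    For the LUKS/dm-crypt route mentioned in the edit, the usual recipe is a keyfile on the USB stick added as a second key slot, then referenced from /etc/crypttab via the passdev keyscript; a sketch (device names and paths are illustrative, and the crypttab field syntax varies between releases):

        sudo dd if=/dev/urandom of=/media/usbkey/luks.key bs=512 count=8
        sudo cryptsetup luksAddKey /dev/sda5 /media/usbkey/luks.key
        # /etc/crypttab entry along these lines, then run: sudo update-initramfs -u
        #   sda5_crypt UUID=<luks-uuid> /dev/disk/by-uuid/<usb-uuid>:/luks.key luks,keyscript=passdev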

    Read the article

  • Portal and Content - Components, part 3 – Applied Customization Framework (4 of 7)

    - by Stefan Krantz
    Have you ever been challenged with a situation where your work task asks you to implement functionality in the WebCenter Portal, you browse through the Resource Catalog (Business Dictionary) and find the functionality you need, but when you get started there are small shortcomings, and you ask yourself:

    - How can I re-use what is out of the box?
    - I wonder what code I need to use to produce similar functions and include my new requirements?
    - Must I write a new taskflow?

    The answer to the above questions is, many times, simply that you can do a taskflow customization to out-of-the-box taskflows. In this post I will help you understand how to do such a customization. It is best described as a 4-step process; see the image flow below for illustration.

    Just to clarify a few naming confusions that might occur as you go through the above process:

    - Customization Role is a function within JDeveloper that allows you to implement view and flow customizations to existing taskflows.
    - WebCenter Portal - Spaces Taskflow Customization Framework: this technology scope does not refer only to WebCenter Spaces; it also includes WebCenter Portal/Framework.
    - A taskflow customization does not overwrite or replace any code; it just creates an additional tip view of the taskflow in the MDS for the current application (WebCenter Portal or WebCenter Spaces).

    To sum up this simple procedure, let me also help you find your way around the main topic for this post series, which focuses primarily on content integration with WebCenter Portal: where can I find content-related taskflows in the WebCenter libraries? The list below mentions some useful locations for taskflows and each taskflow's page fragments.

    Library Reference - WebCenter Document Library Service View

    Content Presenter
    Path: oracle.webcenter.doclib.view.jsf.taskflows.presenter
    Taskflow: contentPresenter.xml - the Content Presenter taskflow
    Taskflow: contentPresenterWizard.xml - the publishing wizard to select content, select a template, and preview, including contribution

    Document Manager
    Path: oracle.webcenter.doclib.view.jsf.taskflows.docManager
    Taskflow: documentManager.xml - the Document Manager taskflow, which includes references to document management features including browsing, download, uploading and viewing

    For more information on taskflow customizations please see the following documentation: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e10148/jpsdg_taskflows.htm#BACIEGJD

    Read the article

  • How To Back Up a MySQL Database Using phpMyAdmin

    - by Jyoti
    It is very important to back up your MySQL database; you will probably only realize this when it is too late. A lot of web applications use MySQL to store their content: blogs and many other things. When you have all your content as HTML files on your web server, it is very easy to keep it safe from crashes; you just keep a copy on your own PC and upload it again after the web server is restored. All the content in the MySQL database must also be backed up. If you have spent a lot of time creating the content and it is stored only in the MySQL server, you will feel very bad if it gets lost forever. Backing it up once every month or so makes sure you never lose too much of your work in case of a server crash, and it will make you sleep better at night. It is easy and fast, so there is no reason not to do it.

    Step 1: Log into phpMyAdmin on your server.
    Step 2: Select the database that you would like to back up from the drop-down menu called Database.
    Step 3: A new page will be loaded in phpMyAdmin showing the selected database. In order to proceed with the backup, click on the Export tab.
    Step 4: The options that you should select, apart from the default ones, are "Save as file", which will save the file locally to your computer in .sql format, and "Add DROP TABLE", which adds drop-table statements to the backup in case the tables already exist in the database you restore into.
    Step 5: Click on the Go button to start the export/backup procedure for your database. A download window will pop up prompting for the exact place where you would like to save the file on your local computer. It is possible that the download starts automatically; this depends on your browser's settings.
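    Where you have shell access, the same backup can be scripted with mysqldump instead of clicking through phpMyAdmin; a sketch (credentials and database name are placeholders):

        # --add-drop-table matches the option selected in Step 4 above
        mysqldump -u your_user -p --add-drop-table your_database > backup-$(date +%F).sql
        # restore later with:
        #   mysql -u your_user -p your_database < backup-$(date +%F).sql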

    Read the article
