Search Results

Search found 770 results on 31 pages for 'fred jones'.

Page 29/31 | < Previous Page | 25 26 27 28 29 30 31  | Next Page >

  • Why can't I set boolean columns with update?

    - by Benjamin Oakes
    I'm making a user administration page. For the system I'm creating, users need to be approved. Sometimes, there will be many users to approve, so I'd like to make that easy. I'm storing this as a boolean column called approved. I remembered the Edit Multiple Individually Railscast and thought it would be a great fit. However, I'm running into problems, which I traced back to ActiveRecord::Base#update. update works fine in this example: >> User.all.map(&:username) => ["ben", "fred"] >> h = {"1"=>{'username'=>'benjamin'}, "2"=>{"username"=>'frederick'}} => {"1"=>{"username"=>"benjamin"}, "2"=>{"username"=>"frederick"}} >> User.update(h.keys, h.values) => ... >> User.all.map(&:username) => ["benjamin", "frederick"] But not this one: >> User.all.map(&:approved) => [true, nil] >> h = {"1"=>{'approved'=>'1'}, "2"=>{'approved'=>'1'}} >> User.update(h.keys, h.values) => ... >> User.all.map(&:approved) => [true, nil] Changing '1' to true didn't make a difference when I tested. What am I doing wrong?
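
    One likely culprit (an assumption, since the model isn't shown in the snippet) is mass-assignment protection: ActiveRecord::Base#update assigns attributes en masse, so a column left out of attr_accessible is silently skipped even though whitelisted columns such as username still change. A minimal sketch of both workarounds:

        # Sketch, assuming the User model whitelists attributes.
        class User < ActiveRecord::Base
          attr_accessible :username, :approved   # add :approved to the whitelist
        end

        # Or bypass mass assignment entirely for this one column:
        h.each do |id, attrs|
          User.find(id).update_attribute(:approved, attrs['approved'] == '1')
        end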

    Read the article

  • Flash As3 Mute Button problems

    - by Lee
    Hey guys, I am trying to create a UI movie clip that can be used across different scenes. It uses variables from the root scope to determine states. When I press the mute button it works fine; however, when I try to un-mute, things go weird. Sometimes it takes 2 clicks to unmute, sometimes more. It seems random. Muting, however, seems to work first time. Any ideas? Main Timeline: var mute:Boolean = false; var playerName = "Fred"; function setMute(vol) { var sTransform:SoundTransform = new SoundTransform(1,0); sTransform.volume = vol; SoundMixer.soundTransform = sTransform; } function toggleMuteBtn(event:Event) { if (mute) { // Sound On, Mute Off mute = false; setMute(1); ui_mc.muteCross_mc.visible = false; } else { // Sound Off, Mute On mute = true; setMute(0); ui_mc.muteCross_mc.visible = true; } } ui_mc Action Script: if (MovieClip(parent).mute == false) { muteCross_mc.visible = false; } mute_btn.addEventListener(MouseEvent.CLICK, MovieClip(parent).toggleMuteBtn);
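
    One guess worth checking (not something the snippet confirms): the var mute:Boolean = false; on the main timeline re-runs whenever the playhead enters a new scene, so the flag can fall out of sync with the actual mixer volume while the listener keeps firing. A sketch that derives the state from SoundMixer itself instead of a timeline flag:

        // Sketch; ui_mc, muteCross_mc and setMute() are the ones from the question.
        function toggleMuteBtn(event:Event):void {
            var muted:Boolean = (SoundMixer.soundTransform.volume == 0);
            setMute(muted ? 1 : 0);               // if currently muted, restore the volume
            ui_mc.muteCross_mc.visible = !muted;  // show the cross only when muting
        }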

    Read the article

  • Converting table columns to key value pairs

    - by TomD1
    I am writing a PL/SQL procedure that loads some data from Schema A into Schema B. They are both very different schemas and I can't change the structure of Schema B. Columns in various tables in Schema A (joined together in a view) need to be inserted into Schema B as key=value pairs in 2 columns in a table, each on a separate row. For example, an employee's first name might be present as employee.firstname in Schema A, but would need to be entered in Schema B as: id=>1, key=>'A123', value=>'Smith' There are almost 100 keys, with the potential for more to be added in future. This means I don't really want to hardcode any of these keys. Sample code: create table schema_a_employees ( emp_id number(8,0), firstname varchar2(50), surname varchar2(50) ); insert into schema_a_employees values ( 1, 'James', 'Smith' ); insert into schema_a_employees values ( 2, 'Fred', 'Jones' ); create table schema_b_values ( emp_id number(8,0), the_key varchar2(5), the_value varchar2(200) ); I thought an elegant solution would most likely involve a lookup table to determine what value to insert for each key, and doesn't involve effectively hardcoding dozens of similar statements like.... insert into schema_b_values ( 1, 'A123', v_firstname ); insert into schema_b_values ( 1, 'B123', v_surname ); What I'd like to be able to do is have a local lookup table in Schema A that lists all the keys from Schema B, along with a column that gives the name of the column in the table in Schema A that should be used to populate, e.g. key "A123" in Schema B should be populated with the value of the column "firstname" in Schema A, e.g. create table schema_a_lookup ( the_key varchar2(5), the_local_field_name varchar2(50) ); insert into schema_a_lookup values ( 'A123', 'firstname' ); insert into schema_a_lookup values ( 'B123', 'surname' ); But I'm not sure how I could dynamically use values from the lookup table to tell Oracle which columns to use. So my question is, is there an elegant solution to populate schema_b_values table with the data from schema_a_employees without hardcoding for every possible key (i.e. A123, B123, etc)? Cheers.
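
    A sketch of the dynamic-SQL route (untested, and assuming the tables above): the lookup table drives one INSERT ... SELECT per key, so a new key only needs a new lookup row rather than new code.

        -- Sketch: the column name is concatenated, but it comes from the local
        -- lookup table, so it stays under your control.
        BEGIN
          FOR k IN (SELECT the_key, the_local_field_name FROM schema_a_lookup) LOOP
            EXECUTE IMMEDIATE
              'INSERT INTO schema_b_values (emp_id, the_key, the_value) ' ||
              'SELECT emp_id, :the_key, ' || k.the_local_field_name ||
              '  FROM schema_a_employees'
              USING k.the_key;
          END LOOP;
        END;
        /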

    Read the article

  • Select XML nodes as rows

    - by Bjørn
    I am selecting from a table that has an XML column using T-SQL. I would like to select a certain type of node and have a row created for each one. For instance, suppose I am selecting from a people table. This table has an XML column for addresses. The XML is formatted similarly to the following: <address> <street>Street 1</street> <city>City 1</city> <state>State 1</state> <zipcode>Zip Code 1</zipcode> </address> <address> <street>Street 2</street> <city>City 2</city> <state>State 2</state> <zipcode>Zip Code 2</zipcode> </address> How can I get results like this: Name         City         State Joe Baker   Seattle      WA Joe Baker   Tacoma     WA Fred Jones  Vancouver BC
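
    A sketch with CROSS APPLY (the table and column names here are guesses, with the XML column called Addresses): nodes() turns each <address> element into its own row, and value() pulls out the individual fields.

        SELECT  p.Name,
                a.addr.value('(city/text())[1]',  'varchar(100)') AS City,
                a.addr.value('(state/text())[1]', 'varchar(100)') AS [State]
        FROM    People AS p
        CROSS APPLY p.Addresses.nodes('/address') AS a(addr);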

    Read the article

  • Looping through a SimpleXML object, or turning the whole thing into an array.

    - by Coffee Cup
    I'm trying to work out how to iterate through a returned SimpleXML object. I'm using a toolkit called Tarzan AWS, which connects to Amazon Web Services (SimpleDB, S3, EC2, etc). I'm specifically using SimpleDB. I can put data into the Amazon SimpleDB service, and I can get it back. I just don't know how to handle the SimpleXML object that is returned. The Tarzan AWS documentation says this: Look at the response to navigate through the headers and body of the response. Note that this is an object, not an array, and that the body is a SimpleXML object. Here's a sample of the returned SimpleXML object: [body] = SimpleXMLElement Object ( [QueryWithAttributesResult] = SimpleXMLElement Object ( [Item] = Array ( [0] = SimpleXMLElement Object ( [Name] = message12413344443260 [Attribute] = Array ( [0] = SimpleXMLElement Object ( [Name] = active [Value] = 1 ) [1] = SimpleXMLElement Object ( [Name] = user [Value] = john ) [2] = SimpleXMLElement Object ( [Name] = message [Value] = This is a message. ) [3] = SimpleXMLElement Object ( [Name] = time [Value] = 1241334444 ) [4] = SimpleXMLElement Object ( [Name] = id [Value] = 12413344443260 ) [5] = SimpleXMLElement Object ( [Name] = ip [Value] = 10.10.10.1 ) ) ) [1] = SimpleXMLElement Object ( [Name] = message12413346907303 [Attribute] = Array ( [0] = SimpleXMLElement Object ( [Name] = active [Value] = 1 ) [1] = SimpleXMLElement Object ( [Name] = user [Value] = fred ) [2] = SimpleXMLElement Object ( [Name] = message [Value] = This is another message ) [3] = SimpleXMLElement Object ( [Name] = time [Value] = 1241334690 ) [4] = SimpleXMLElement Object ( [Name] = id [Value] = 12413346907303 ) [5] = SimpleXMLElement Object ( [Name] = ip [Value] = 10.10.10.2 ) ) ) ) So what code do I need to get through each of the object items? I'd like to loop through each of them and handle it like a returned mySQL query. For example, I can query SimpleDB and then loop through the SimpleXML so I can display the results on the page. Alternatively, how do you turn the whole shebang into an array? I'm new to SimpleXML, so I apologise if my questions aren't specific enough.
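
    A sketch of the loop (assuming $response is the Tarzan AWS response object shown above): each Item becomes one associative array keyed by attribute name, which can then be handled much like a fetched MySQL row.

        <?php
        // Build an array of rows, one per SimpleDB item.
        $rows = array();
        foreach ($response->body->QueryWithAttributesResult->Item as $item) {
            $row = array('ItemName' => (string) $item->Name);
            foreach ($item->Attribute as $attribute) {
                $row[(string) $attribute->Name] = (string) $attribute->Value;
            }
            $rows[] = $row;
        }

        // Loop over the rows as you would a query result.
        foreach ($rows as $row) {
            echo $row['user'] . ': ' . $row['message'] . "<br />";
        }
        ?>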

    Read the article

  • QTableWidget::itemAt() returns seemingly random items

    - by Jordan Milne
    I've just started using Qt, so please bear with me. When I use QTableWidget::itemAt(), it returns a different item than the one currentItemChanged gives me when I click the same cell. I believe it's necessary to use itemAt() since I need to get the first column of whatever row was clicked. Some example code is below: MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent), ui(new Ui::MainWindow) { ui->setupUi(this); QList<QString> rowContents; rowContents << "Foo" << "Bar" << "Baz" << "Qux" << "Quux" << "Corge" << "Grault" << "Garply" << "Waldo" << "Fred"; for(int i =0; i < 10; ++i) { ui->tableTest->insertRow(i); ui->tableTest->setItem(i, 0, new QTableWidgetItem(rowContents[i])); ui->tableTest->setItem(i, 1, new QTableWidgetItem(QString::number(i))); } } //... void MainWindow::on_tableTest_currentItemChanged(QTableWidgetItem* current, QTableWidgetItem* previous) { ui->lblColumn->setText(QString::number(current->column())); ui->lblRow->setText(QString::number(current->row())); ui->lblCurrentItem->setText(current->text()); ui->lblCurrentCell->setText(ui->tableTest->itemAt(current->column(), current->row())->text()); } For the item at 1x9, lblCurrentItem displays "9" (as it should), whereas lblCurrentCell displays "Quux". Am I doing something wrong?
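
    For what it's worth, itemAt() expects viewport coordinates in pixels, not a row/column pair, so passing column() and row() into it lands on whichever cell happens to sit at that pixel position. item(row, column) is the index-based lookup; a sketch of the slot using it:

        void MainWindow::on_tableTest_currentItemChanged(QTableWidgetItem *current,
                                                         QTableWidgetItem *previous)
        {
            if (!current)
                return;
            ui->lblColumn->setText(QString::number(current->column()));
            ui->lblRow->setText(QString::number(current->row()));
            ui->lblCurrentItem->setText(current->text());

            // First column of the clicked row, looked up by index rather than by pixel.
            QTableWidgetItem *first = ui->tableTest->item(current->row(), 0);
            ui->lblCurrentCell->setText(first ? first->text() : QString());
        }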

    Read the article

  • Javascript onclick() event bubbling - working or not?

    - by user1071914
    I have a table in which the table row tag is decorated with an onclick() handler. If the row is clicked anywhere, it will load another page. In one of the elements in the row is an anchor tag which also leads to another page. The desired behavior is that if they click on the link, "delete.html" is loaded. If they click anywhere else in the row, "edit.html" is loaded. The problem is that sometimes (according to users) both the link and the onclick() are fired at once, leading to a problem in the back end code. They swear they are not double-clicking. I don't know enough about Javascript event bubbling, handling and whatever to even know where to start with this bizarre problem, so I'm asking for help. Here's a fragment of the rendered page, showing the row with the embedded link and associated script tag. Any suggestions are welcome: <tr id="tableRow_3339_0" class="odd"> <td class="l"></td> <td>PENDING</td> <td>Yabba Dabba Doo</td> <td>Fred Flintstone</td> <td> <a href="/delete.html?requestId=3339"> <div class="deleteButtonIcon"></div> </a> </td> <td class="r"></td> </tr> <script type="text/javascript">document.getElementById("tableRow_3339_0").onclick = function(event) { window.location = '//edit.html?requestId=3339'; };</script>
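
    A common remedy, sketched here rather than taken from the linked answers, is to stop the click on the anchor from bubbling up to the row, so only one handler runs for any given click:

        document.getElementById("tableRow_3339_0").onclick = function () {
            window.location = '/edit.html?requestId=3339';
        };

        // Keep clicks on the delete link from also firing the row handler.
        var links = document.getElementById("tableRow_3339_0").getElementsByTagName("a");
        for (var i = 0; i < links.length; i++) {
            links[i].onclick = function (event) {
                event = event || window.event;
                if (event.stopPropagation) { event.stopPropagation(); }
                else { event.cancelBubble = true; }   // older IE
            };
        }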

    Read the article

  • Should we use p(..) or (*p)(..) when p is a function pointer?

    - by q0987
    Reference: [33.11] Can I convert a pointer-to-function to a void*? #include "stdafx.h" #include <iostream> int f(char x, int y) { return x; } int g(char x, int y) { return y; } typedef int(*FunctPtr)(char,int); int callit(FunctPtr p, char x, int y) // original { return p(x, y); } int callitB(FunctPtr p, char x, int y) // updated { return (*p)(x, y); } int _tmain(int argc, _TCHAR* argv[]) { FunctPtr p = g; // original std::cout << p('c', 'a') << std::endl; FunctPtr pB = &g; // updated std::cout << (*pB)('c', 'a') << std::endl; return 0; } Question: Which way, the original or the updated, is the recommended method? Thank you. Although I do see the following usage in the original post: void baz() { FredMemFn p = &Fred::f; // declare a member-function pointer ... }
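
    Both spellings are equivalent: the function-to-pointer conversion means p(x, y), (*p)(x, y), g and &g all refer to the same function, so the choice is purely stylistic (the (*p)(...) form just advertises that p is a pointer). Member-function pointers, as in the FredMemFn line, are the exception and must be called through .* or ->*. A minimal sketch:

        #include <iostream>

        int g(char x, int y) { return y; }
        typedef int (*FunctPtr)(char, int);

        int main()
        {
            FunctPtr p = g;        // &g is equally valid
            std::cout << p('c', 'a') << " " << (*p)('c', 'a') << std::endl;  // same value twice
            return 0;
        }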

    Read the article

  • How to process a large post array in PHP where item names are all different and not known in advance

    - by Salnajjar
    I have a PHP page that queries a DB to populate a form for the user to modify the data and submit. The query returns a number of rows which contain 3 items: ImageID ImageName ImageDescription The PHP page titles each box in the form with a generic name and appends the ImageID to it, i.e.: ImageID_03 ImageName_34 ImageDescription_22 As it's unknown which images will have been retrieved from the DB, I can't know in advance what the names of the form entries will be. The form deals with a large number of entries at the same time. My backend PHP form processor that gets the data just sees it as one big array: [imageid_2] => 2 [imagename_2] => _MG_0214 [imageid_10] => 10 [imagename_10] => _MG_0419 [imageid_39] => 39 [imagename_39] => _MG_0420 [imageid_22] => 22 [imagename_22] => Curly Fern [imagedescription_2] => Wibble [imagedescription_10] => Wobble [imagedescription_39] => Fred [imagedescription_22] => Sally I've tried to do an array walk on it to split it into 3 separate arrays, but am stuck: // define empty arrays $imageidarray = array(); $imagenamearray = array(); $imagedescriptionarray = array(); // our function to call when we walk through the posted items array function assignvars($entry, $key) { if (preg_match("/imageid/i", $key)) { array_push($imageidarray, $entry); } elseif (preg_match("/imagename/i", $key)) { // echo " ImageName: $entry"; } elseif (preg_match("/imagedescription/i", $key)) { // echo " ImageDescription: $entry"; } } array_walk($_POST, 'assignvars'); This fails with the error: array_push(): First argument should be an array in... Am I approaching this wrong?
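
    One likely reason for that error (an assumption based on the snippet): the callback cannot see $imageidarray because PHP functions do not inherit the enclosing scope, so array_push receives an undefined variable. A sketch that sidesteps the globals by capturing the ID out of each key:

        <?php
        $images = array();
        foreach ($_POST as $key => $value) {
            if (preg_match('/^(imageid|imagename|imagedescription)_(\d+)$/i', $key, $m)) {
                $images[(int) $m[2]][strtolower($m[1])] = $value;
            }
        }
        // e.g. $images[2]['imagename'] is '_MG_0214' and $images[2]['imagedescription'] is 'Wibble'.
        ?>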

    Read the article

  • Pigs in Socks?

    - by MightyZot
    My wonderful wife Annie surprised me with a cruise to Cozumel for my fortieth birthday. I love to travel. Every trip is ripe with adventure, crazy things to see and experience. For example, on the way to Mobile Alabama to catch our boat, some dude hauling a mobile home lost a window and we drove through a cloud of busting glass going 80 miles per hour! The night before the cruise, we stayed in the Malaga Inn and I crawled UNDER the hotel to look at an old civil war bunker. WOAH! Then, on the way to and from Cozumel, the boat plowed through two beautiful and slightly violent storms. But, the adventures you have while travelling often pale in comparison to the cult of personalities you meet along the way.  :) We met many cool people during our travels and we made some new friends. Todd and Andrea are in the publishing business (www.myneworleans.com) and teaching, respectively. Erika is a teacher too and Matt has a pig on his foot. This story is about the pig. Without that pig on Matt’s foot, we probably would have hit a buoy and drowned. Alright, so…this pig on Matt’s foot…this is no henna tatt, this is a man’s tattoo. Apparently, getting tattoos on your feet is very painful because there is very little muscle and fat and lots of nifty nerves to tell you that you might be doing something stupid. Pig and rooster tattoos carry special meaning for sailors of old. According to some sources, having a tattoo of a pig or rooster on one foot or the other will keep you from drowning. There are many great musings as to why a pig and a rooster might save your life. The most plausible in my opinion is that pigs and roosters were common livestock tagging along with the crew. Since they were shipped in wooden crates, pigs and roosters were often counted amongst the survivors when ships succumbed to Davy Jones’ Locker. I didn’t spend a whole lot of time researching the pig and the rooster, so consider these musings as you would a grain of salt. And, I was not able to find a lot of what you might consider credible history regarding the tradition. What I did find was a comfort, or solace, in the maritime tradition. Seems like raw traditions like the pig and the rooster are in danger of getting lost in a sea of non-permanence. I mean, what traditions are us old programmers and techies leaving behind for future generations? Makes me wonder what Ward Christensen has tattooed on his left foot.  I guess my choice would have to be a Commodore 64.   (I met Ward, by the way, in an elevator after he received his Dvorak awards in 1992. He was a very non-assuming individual sporting business casual and was very much a “sailor” of an old-school programmer. I can’t remember his exact words, but I think they were essentially that he felt it odd that he was getting an award for just doing his work. I’m sure that Ward doesn’t know this…he couldn’t have set a more positive example for a young 22 year old programmer. Thanks Ward!)

    Read the article

  • SQL SERVER – Merge Operations – Insert, Update, Delete in Single Execution

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by Jorge Segarra (aka SQLChicken). I have been very active using these Merge operations in my development. However, I have found out from my consultancy work and friends that these amazing operations are not utilized by them most of the time. Here is my attempt to bring the necessity of using the Merge Operation to surface one more time. MERGE is a new feature that provides an efficient way to do multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; however, at present, by using the MERGE statement, we can include the logic of such data changes in one statement that even checks when the data is matched and then just updates it and, similarly, when the data is unmatched, inserts it. One of the most important advantages of the MERGE statement is that the entire data set is read and processed only once. In earlier versions, three different statements had to be written to process three different activities (INSERT, UPDATE or DELETE); however, by using the MERGE statement, all the update activities can be done in one pass of the database table. I have written about these Merge Operations earlier in my blog post here: SQL SERVER – 2008 – Introduction to Merge Statement – One Statement for INSERT, UPDATE, DELETE. I was asked by one of the readers how we know that this operator does everything in a single pass and does not call the Merge operator multiple times. Let us run the same example which I have used earlier; I am listing the same here again for convenience. --Let’s create Student Details and StudentTotalMarks and insert some records. USE tempdb GO CREATE TABLE StudentDetails ( StudentID INTEGER PRIMARY KEY, StudentName VARCHAR(15) ) GO INSERT INTO StudentDetails VALUES(1,'SMITH') INSERT INTO StudentDetails VALUES(2,'ALLEN') INSERT INTO StudentDetails VALUES(3,'JONES') INSERT INTO StudentDetails VALUES(4,'MARTIN') INSERT INTO StudentDetails VALUES(5,'JAMES') GO CREATE TABLE StudentTotalMarks ( StudentID INTEGER REFERENCES StudentDetails, StudentMarks INTEGER ) GO INSERT INTO StudentTotalMarks VALUES(1,230) INSERT INTO StudentTotalMarks VALUES(2,255) INSERT INTO StudentTotalMarks VALUES(3,200) GO -- Select from Table SELECT * FROM StudentDetails GO SELECT * FROM StudentTotalMarks GO -- Merge Statement MERGE StudentTotalMarks AS stm USING (SELECT StudentID,StudentName FROM StudentDetails) AS sd ON stm.StudentID = sd.StudentID WHEN MATCHED AND stm.StudentMarks > 250 THEN DELETE WHEN MATCHED THEN UPDATE SET stm.StudentMarks = stm.StudentMarks + 25 WHEN NOT MATCHED THEN INSERT(StudentID,StudentMarks) VALUES(sd.StudentID,25); GO -- Select from Table SELECT * FROM StudentDetails GO SELECT * FROM StudentTotalMarks GO -- Clean up DROP TABLE StudentDetails GO DROP TABLE StudentTotalMarks GO The Merge Join performs very well and the following result is obtained. Let us check the execution plan for the merge operator. You can click on the following image to enlarge it. Let us evaluate the execution plan for the Table Merge Operator only. We can clearly see that the Number of Executions property shows a value of 1, which makes it quite clear that in a single pass, the Merge operation completes the Insert, Update and Delete operations. I strongly suggest you all use this operation, if possible, in your development. I have seen this operation implemented in many data warehousing applications.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Joins, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Merge

    Read the article

  • My new laptop - with a really nice battery option

    - by Rob Farley
    It was about time I got a new laptop, and so I made a phone-call to Dell to discuss my options. I decided not to get an SSD from them, because I’d rather choose one myself – the sales guy tells me that changing the HD doesn’t void my warranty, so that’s good (incidentally, I’d love to hear people’s recommendations for which SSD to get for my laptop). Unfortunately this machine only has one HD slot, but I figure that I’ll put lots of stuff onto external disks anyway. The machine I got was a Dell Studio XPS 16. It’s red (which suits my company), but also has the Intel® Core™ i7-820QM Processor, which is 4 Cores/8 Threads. Makes for a pretty Task Manager, but nothing like the one I saw at SQLBits last year (at 96 cores), or the one that my good friend James Rowland-Jones writes about here. But the reason for this post is actually something in the software that comes with the machine – you know, the stuff that most people uninstall at the earliest opportunity. I had just reinstalled the operating system, and was going through the utilities to get the drivers up-to-date, when I noticed that one of Dell applications included an option to disable battery charging. So I installed it. And sure enough, I can tell the battery not to charge now. Clearly Dell see it as a temporary option, and one that’s designed for when you’re on a plane. But for me, I most often use my laptop with the power plugged in, which means I don’t need to have my battery continually topping itself up. So I really love this option, but I feel like it could go a little further. I’d like “Not Charging” to be the default option, and let me set it when I want to charge it (which should theoretically make my battery last longer). I also intend to work out how this option works, so that I can script it and put it into my StartUp options (so it can be the Default setting). Actually – if someone has already worked this out and can tell me what it does, then please feel free to let me know. Even better would be an external switch. I had a switch on my old laptop (a Dell Latitude) for WiFi, so that I could turn that off before I turned on the computer (this laptop doesn’t give me that option – no physical switch for flight mode). I guess it just means I’ll get used to leaving the WiFi off by default, and turning it on when I want it – might save myself some battery power that way too. Soon I’ll need to take the plunge and sync my iPhone with the new laptop. I’m a little worried that I might lose something – Apple’s messages about how my stuff will be wiped and replaced with what’s on the PC doesn’t fill me with confidence, as it’s a new PC that doesn’t have stuff on it. But having a new machine is definitely a nice experience, and one that I can recommend. I’m sure when I get around to buying an SSD I’ll feel like it’s shiny and new all over again!

    Read the article

  • Write TSQL, win a Kindle.

    - by Fatherjack
    So recently Red Gate launched sqlmonitormetrics.red-gate.com and showed the world how to embed your own scripts harmoniously in a third party tool to get the details that you want about your SQL Server performance. The site has a way to submit your own metrics and take a copy of the ones that other people have submitted to build a library of code to keep track of key metrics of your server's performance. There have been several submissions already but they have now launched a competition to provide an incentive for you to get creative and show us what you can do with a bit of TSQL and the SQL Monitor framework*. What’s it worth? Well, if you are one of the 3 winners then you get to choose either a Kindle Fire or $199. How do you win? Simply write the T-SQL for a SQL Monitor custom metric and the relevant description and introduction for it and submit it via sqlmonitormetrics.red-gate.com before 14th Sept 2012, and then sit back and wait while the judges review your code and your aims in writing the metric. Who are the judges and how will they judge the metrics? There are two judges for this competition, Steve Jones (Microsoft SQL Server MVP, co-founder of SQLServerCentral.com, author, blogger etc) and Jonathan Allen (um, yeah, Steve has done all the good stuff, I’m here by good fortune). We will be looking to rate the metrics on each of 3 criteria: how the metric can help with performance tuning SQL Server. how having the metric running enables DBAs to meet best practice. how interesting/original the idea for the metric is. Our combined decision will be final etc etc. ** What happens to my metric? Any metrics submitted to the competition will be automatically entered into the site library and become available for sharing once the competition is over. You’ll get full credit for metrics you submit regardless of the competition results. You can enter as many metrics as you like. How long does it take? Honestly? Once you have the T-SQL sorted then so long as you can type your name and your email address you are done: http://sqlmonitormetrics.red-gate.com/share-a-metric/ What can I monitor? If you really really want a Kindle or $199 (and let’s face it, who doesn’t?) and are momentarily stuck for inspiration, take a look at these example custom metrics that have been written by Stuart Ainsworth, Fabiano Amorim, TJay Belt, Louis Davidson, Grant Fritchey, Brad McGehee and me to start the library off. There are some great pieces of TSQL in those metrics gathering important stats about how SQL Server is performing. * – framework may not be the best word here but I was under pressure and couldn't think of a better one. If you prefer, try ‘engine’ or ‘application’? I don’t know, pick something that makes sense to you. ** – for the full (legal) version of the rules check the details on sqlmonitormetrics.red-gate.com or send us an email if you want any point clarified. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided “as is” and no guarantee, warranty or accuracy is applicable or inferred, run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • The Oracle Retail Week Awards - most exciting awards yet?

    - by sarah.taylor(at)oracle.com
    Last night's annual Oracle Retail Week Awards saw the UK's top retailers come together to celebrate the very best of our industry over the last year.  The Grosvenor House Hotel on Park Lane in London was the setting for an exciting ceremony which this year marked several significant milestones in British - and global - retail.  Check out our videos about the event at our Oracle Retail YouTube channel, and see if you were snapped by our photographer on our Oracle Retail Facebook page. There were some extremely hot contests for many of this year's awards - and all very deserving winners.  The entries have demonstrated beyond doubt that retailers have striven to push their standards up yet again in all areas over the past year.  The judging panel includes some of the most prestigious names in the retail industry - to impress the panel enough to win an award is a substantial achievement.  This year the panel included the likes of Andy Clarke - Chief Executive of ASDA Group; Mark Newton Jones - CEO of Shop Direct Group; Richard Pennycook - the finance director at Morrisons; Rob Templeman - Chief Executive of Debenhams; and Stephen Sunnucks - the president of Gap Europe.  These are retail veterans  who have each helped to shape the British High Street over the last decade.  It was great to chat with many of them in the Oracle VIP area last night.  For me, last night's highlight was honouring both Sir Stuart Rose and Sir Terry Leahy for their contributions to the retail industry.  Both have set the standards in retailing over the last twenty years and taken their respective businesses from strength to strength, demonstrating that there is always a need for innovation even in larger businesses, and that a business has to adapt quickly to new technology in order to stay competitive.  Sir Terry Leahy's retirement this year marks the end of an era of global expansion for the Tesco group and a milestone in the progression of British retail.  Sir Terry has helped steer Tesco through nearly 20 years of change, with 14 years as Chief Executive.  During this time he led the drive for international expansion and an aggressive campaign to increase market share.  He has led the way for High Street retailers in adapting to the rise of internet retailing and nurtured a very successful home delivery service.  More recently he has pioneered the notion of cross-channel retailing with the introduction of Tesco apps for the iPhone and Android mobile phones allowing customers to scan barcodes of items to add to a shopping list which they can then either refer to in store or order for delivery.  John Lewis Partnership was a very deserving winner of The Oracle Retailer of the Year award for their overall dedication to excellent retailing practices.  The business was also named the American Express Marketing/Advertising Campaign of the Year award for their memorable 'Never Knowingly Undersold' advert series, which included a very successful viral video and radio campaign with Fyfe Dangerfield's cover of Billy Joel's 'She's Always a Woman' used for the adverts.  Store Design of the Year was another exciting category with Topshop taking the accolade for its flagship Oxford Street store in London, which combines boutique concession-style stalls with high fashion displays and exclusive collections from leading designers.  The store even has its own hairdressers and food hall, making it a truly all-inclusive fashion retail experience and a global landmark for any self-respecting international fashion shopper. 
Over the next few weeks we'll be exploring some of the winning entries in more detail here on the blog, so keep an eye out for some unique insights into how the winning retailers have made such remarkable achievements. 

    Read the article

  • Bitmask data insertions in SSDT Post-Deployment scripts

    - by jamiet
    On my current project we are using SQL Server Data Tools (SSDT) to manage our database schema and one of the tasks we need to do often is insert data into that schema once deployed; the typical method employed to do this is to leverage Post-Deployment scripts and that is exactly what we are doing. Our requirement is a little different though: our data is split up into various buckets that we need to selectively deploy on a case-by-case basis. I was going to use a SQLCMD variable for each bucket (defaulted to some value other than “Yes”) to define whether it should be deployed or not so we could use something like this in our Post-Deployment script: IF ($(DeployBucket1Flag) = 'Yes') BEGIN :r .\Bucket1.data.sql END IF ($(DeployBucket2Flag) = 'Yes') BEGIN :r .\Bucket2.data.sql END IF ($(DeployBucket3Flag) = 'Yes') BEGIN :r .\Bucket3.data.sql END That works fine and is, I’m sure, a very common technique for doing this. It is however slightly ugly because we have to litter our deployment with various SQLCMD variables. My colleague James Rowland-Jones (whom I’m sure many of you know) suggested another technique – bitmasks. I won’t go into detail about how this works (James has already done that at Using a Bitmask - a practical example) but I’ll summarise by saying that you can deploy different combinations of the buckets simply by supplying a different numerical value for a single SQLCMD variable. Each bit of that value’s binary representation signifies whether a particular bucket should be deployed or not. This is better demonstrated using the following simple script (which can be easily leveraged inside your Post-Deployment scripts): /* $(DeployData) is a SQLCMD variable that would, if you were using this in SSDT, be declared in the SQLCMD variables section of your project file. It should contain a numerical value, defaulted to 0. In this example I have declared it using a :setvar statement. Test the effect of different values by changing the :setvar statement accordingly. Examples: :setvar DeployData 1 will deploy bucket 1 :setvar DeployData 2 will deploy bucket 2 :setvar DeployData 3   will deploy buckets 1 & 2 :setvar DeployData 6   will deploy buckets 2 & 3 :setvar DeployData 31  will deploy buckets 1, 2, 3, 4 & 5 */ :setvar DeployData 0 DECLARE  @bitmask VARBINARY(MAX) = CONVERT(VARBINARY,$(DeployData)); IF (@bitmask & 1 = 1) BEGIN     PRINT 'Bucket 1 insertions'; END IF (@bitmask & 2 = 2) BEGIN     PRINT 'Bucket 2 insertions'; END IF (@bitmask & 4 = 4) BEGIN     PRINT 'Bucket 3 insertions'; END IF (@bitmask & 8 = 8) BEGIN     PRINT 'Bucket 4 insertions'; END IF (@bitmask & 16 = 16) BEGIN     PRINT 'Bucket 5 insertions'; END An example of running this using DeployData=6: the binary representation of 6 is 110. The second and third significant bits of that binary number are set to 1 and hence buckets 2 and 3 are “activated”. Hope that makes sense and is useful to some of you! @Jamiet P.S. I used the awesome HTML Copy feature of Visual Studio’s Productivity Power Tools in order to format the T-SQL code above for this blog post.

    Read the article

  • Oracle Tutor: Installing Is Not Implementing or Why CIO's should care about End User Adoption

    - by emily.chorba(at)oracle.com
    Eighteen months ago I showed Tutor and UPK Productive Day One overview to a CIO friend of mine. He works in a manufacturing business which had been recently purchased by a global conglomerate. He had a major implementation coming up, but said that the corporate team would be coming in to handle the project. I asked about their end user training approach, but it was unclear to him at the time. We were in touch over the course of the implementation project. The major activities were data conversion, how-to workshops, General Ledger realignment, and report definition. The message was "Here's how we do it at corporate, and here's how you are going to do it." In short, it was an application software installation. The corporate team had experience and confidence and the effort through go-live was smooth. Some weeks after cutover, problems with customer orders began to surface. Orders could not be fulfilled in a timely fashion. The problem got worse, and the corporate emergency team was called in. After many days of analysis, the issue was tracked down and resolved, but by then there were weeks of backorders, and their customer base was impacted in a significant way. It took three months of constant handholding of customers by the sales force for good will to be reestablished, and this itself diminished a new product sales push. I learned of these results in a recent conversation with the CIO. I asked him what the solution to the problem was, and he replied that it was twofold. The first component was a lack of understanding by customer service reps about how a particular data item in order entry was to be filled in, resulting in discrepant order data. The second component was that product planners were using this data, along with data from other sources, to fill in a spreadsheet based on the abandoned system. This spreadsheet was the primary input for planning data. The result of these two inaccuracies was that key parts were not being ordered to effectively meet demand and the lead time for finished goods was pushed out by weeks. I reminded him about the Productive Day One approach, and it's focus on methodology and tools for end user training. A more collaborative solution workshop would have identified proper applications use in the new environment. Using UPK to document correct transaction entry would have provided effective guidelines to the CSRs for data entry. Using Oracle Tutor to document the manual tasks would have eliminated the use of an out of date spreadsheet. As we talked this over, he said, "I wish I knew when I started what I know now." Effective end user adoption is the most critical and most overlooked success factor in applications implementations. When the switch is thrown at go-live, employees need to know how to use the new systems to do their jobs. Their jobs are made up of manual steps and systems steps which must be performed in the right order for the implementing organization to operate smoothly. Use Tutor to document the manual policies and procedures, use UPK to document the systems tasks, and develop this documentation in conjunction with a solution workshop. This is the path to develop effective end user training material for a smooth implementation. Learn More For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum. Chuck Jones, Product Manager, Oracle Tutor and BPM

    Read the article

  • PASS Summit 2011 &ndash; Part II

    - by Tara Kizer
    I arrived in Seattle last Monday afternoon to attend PASS Summit 2011.  I had really wanted to attend Gail Shaw’s (blog|twitter) and Grant Fritchey’s (blog|twitter) pre-conference seminar “All About Execution Plans” on Monday, but that would have meant flying out on Sunday which I couldn’t do.  On Tuesday, I attended Allan Hirt’s (blog|twitter) pre-conference seminar entitled “A Deep Dive into AlwaysOn: Failover Clustering and Availability Groups”.  Allan is a great speaker, and his seminar was packed with demos and information about AlwaysOn in SQL Server 2012.  Unfortunately, I have lost my notes from this seminar and the presentation materials are only available on the pre-con DVD.  Hmpf! On Wednesday, I attended Gail Shaw’s “Bad Plan! Sit!”, Andrew Kelly’s (blog|twitter) “SQL 2008 Query Statistics”, Dan Jones’ (blog|twitter) “Improving your PowerShell Productivity”, and Brent Ozar’s (blog|twitter) “BLITZ! The SQL – More One Hour SQL Server Takeovers”.  In Gail’s session, she went over how to fix bad plans and bad query patterns.  Update your stale statistics! How to fix bad plans Use local variables – optimizer can’t sniff it, so it’ll optimize for “average” value Use RECOMPILE (at the query or stored procedure level) – CPU hit OPTIMIZE FOR hint – most common value you’ll pass How to fix bad query patterns Don’t use them – ha! Catch-all queries Use dynamic SQL OPTION (RECOMPILE) Multiple execution paths Split into multiple stored procedures OPTION (RECOMPILE) Modifying parameter values Use local variables Split into outer and inner procedure OPTION (RECOMPILE) She also went into “last resort” and “very last resort” options, but those are risky unless you know what you are doing.  For the average Joe, she wouldn’t recommend these.  Examples are query hints and plan guides. While I enjoyed Andrew’s session, I didn’t take any notes as it was familiar material.  Andrew is a great speaker though, and I’d highly recommend attending his sessions in the future. Next up was Dan’s PowerShell session.  I need to look into profiles, manifests, function modules, and function import scripts more as I just didn’t quite grasp these concepts.  I am attending a PowerShell training class at the end of November, so maybe that’ll help clear it up.  I really enjoyed the Excel integration demo.  It was very cool watching PowerShell build the spreadsheet in real-time.  I must look into this more!  On a side note, I am jealous of Dan’s hair.  Fabulous hair! Brent’s session showed us how to quickly gather information about a server that you will be taking over database administration duties for.  He wrote a script to do a fast health check and then later wrapped it into a stored procedure, sp_Blitz.  I can’t wait to use this at my work even on systems where I’ve been the primary DBA for years, maybe there’s something I’ve overlooked.  We are using EPM to help standardize our environment and uncover problems, but sp_Blitz will definitely still help us out.  He even provides a cloud-based update feature, sp_BlitzUpdate, for sp_Blitz so you don’t have to constantly update it when he makes a change.  I think I’ll utilize his update code for some other challenges that we face at my work.

    Read the article

  • Social Targeting: This One's Just for You

    - by Mike Stiles
    Think of social targeting in terms of the archery competition we just saw in the Olympics. If someone loaded up 5 arrows and shot them straight up into the air all at once, hoping some would land near the target, the world would have united in laughter. But sadly for hysterical YouTube video viewing, that’s not what happened. The archers sought to maximize every arrow by zeroing in on the spot that would bring them the most points. Marketers have always sought to do the same. But they can only work with the tools that are available. A firm grasp of the desired target does little good if the ad products aren’t there to deliver that target. On the social side, both Facebook and Twitter have taken steps to enhance targeting for marketers. And why not? As the demand to monetize only goes up, they’re quite motivated to leverage and deliver their incredible user bases in ways that make economic sense for advertisers. You could target keywords on Twitter with promoted accounts, and get promoted tweets into search. They would surface for your followers and some users that Twitter thought were like them. Now you can go beyond keywords and target Twitter users based on 350 interests in 25 categories. How does a user wind up in one of these categories? Twitter looks at that user’s tweets, they look at whom they follow, and they run data through some sort of Twitter secret sauce. The result is, you have a much clearer shot at Twitter users who are most likely to welcome and be responsive to your tweets. And beyond the 350 interests, you can also create custom segments that find users who resemble followers of whatever Twitter handle you give it. That means you can now use boring tweets to sell like a madman, right? Not quite. This ad product is still quality-based, meaning if you’re not putting out tweets that lead to interest and thus, engagement, that tweet will earn a low quality score and wind up costing you more under Twitter’s auction system to maintain. That means, as the old knight in “Indiana Jones and the Last Crusade” cautions, “choose wisely” when targeting based on these interests and categories to make sure your interests truly do line up with theirs. On the Facebook side, they’re rolling out ad targeting that uses email addresses, phone numbers, game and app developers’ user ID’s, and eventually addresses for you bigger brands. Why? Because you marketers asked for it. Here you were with this amazing customer list but no way to reach those same customers should they be on Facebook. Now you can find and communicate with customers you gathered outside of social, and use Facebook to do it. Fair to say such users are a sensible target and will be responsive to your message since they’ve already bought something from you. And no you’re not giving your customer info to Facebook. They’ll use something called “hashing” to make sure you don’t see Facebook user data (beyond email, phone number, address, or user ID), and Facebook can’t see your customer data. The end result, social becomes far more workable and more valuable to marketers when it delivers on the promise that made it so exciting in the first place. That promise is the ability to move past casting wide nets to the masses and toward concentrating marketing dollars efficiently on the targets most likely to yield results.

    Read the article

  • Problem with Json Date format when calling cross-domain proxy

    - by Christo Fur
    I am using a proxy service to allow my client side javascript to talk to a service on another domain. The proxy is a simple ashx file which simply gets the request and forwards it on to the service on the other domain: using (var sr = new System.IO.StreamReader(context.Request.InputStream)) { requestData = sr.ReadToEnd(); } string data = HttpUtility.UrlDecode(requestData); using (var client = new WebClient()) { client.BaseAddress = serviceUrl; client.Headers.Add("Content-Type", "application/json"); response = client.UploadString(new Uri(webserviceUrl), data); } The client javascript calling this proxy looks like this: function TestMethod() { $.ajax({ type: "POST", url: "/custommodules/configuratorproxyservice.ashx?m=TestMethod", contentType: "application/json; charset=utf-8", data: JSON.parse('{"testObj":{"Name":"jo","Ref":"jones","LastModified":"\/Date(-62135596800000+0000)\/"}}'), dataType: "json", success: AjaxSucceeded, error: AjaxFailed }); function AjaxSucceeded(result) { alert(result); } function AjaxFailed(result) { alert(result.status + ' - ' + result.statusText); } } This works fine until I have to pass a date, at which point I get a Bad Request error when the proxy tries to call the service. I did have this working at one point but have now lost it. I have tried using JSON.parse on the object before sending, and JSON.stringify, but no joy. Anyone got any ideas what I am missing?
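
    Two guesses, neither confirmed by the snippet: on the client, handing $.ajax a parsed object makes jQuery form-encode it, so sending the raw JSON string (or JSON.stringify of the object) keeps the \/Date(...)\/ literal intact; and on the proxy, HttpUtility.UrlDecode turns the '+' inside '+0000' into a space, which alone can break the date on the far side. A sketch of the proxy without the decode:

        string data;
        using (var sr = new System.IO.StreamReader(context.Request.InputStream))
        {
            data = sr.ReadToEnd();   // the body is already raw JSON; no UrlDecode needed
        }

        using (var client = new WebClient())
        {
            client.BaseAddress = serviceUrl;
            client.Headers.Add("Content-Type", "application/json");
            response = client.UploadString(new Uri(webserviceUrl), data);
        }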

    Read the article

  • Why isn't my WPF Datagrid showing data?

    - by Edward Tanguay
    This walkthrough says you can create a WPF datagrid in one line but doesn't give a full example. So I created an example using a generic list and connected it to the WPF datagrid, but it doesn't show any data. What do I need to change on the code below to get it to show data in the datagrid? ANSWER: This code works now: XAML: <Window x:Class="TestDatagrid345.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:toolkit="http://schemas.microsoft.com/wpf/2008/toolkit" xmlns:local="clr-namespace:TestDatagrid345" Title="Window1" Height="300" Width="300" Loaded="Window_Loaded"> <StackPanel> <toolkit:DataGrid ItemsSource="{Binding}"/> </StackPanel> </Window> Code Behind: using System.Collections.Generic; using System.Windows; namespace TestDatagrid345 { public partial class Window1 : Window { private List<Customer> _customers = new List<Customer>(); public List<Customer> Customers { get { return _customers; }} public Window1() { InitializeComponent(); } private void Window_Loaded(object sender, RoutedEventArgs e) { DataContext = Customers; Customers.Add(new Customer { FirstName = "Tom", LastName = "Jones" }); Customers.Add(new Customer { FirstName = "Joe", LastName = "Thompson" }); Customers.Add(new Customer { FirstName = "Jill", LastName = "Smith" }); } } }

    Read the article

  • ASP.NET Ajax - Asynch request has separate session???

    - by Marcus King
    We are writing a search application that saves the search criteria to session state and executes the search inside of an asp.net updatepanel. When we execute multiple searches in succession, the 2nd or 3rd search will sometimes return results from the first set of search criteria. Example: in our first search we do a look-up on "John Smith" - John Smith results are displayed. In the second search we do a look-up on "Bob Jones" - John Smith results are displayed. We save all of the search criteria in session state as I said, and read it from session state inside of the ajax request to format the DB query. When we put break points in VS everything behaves as normal, but without them we get the original search criteria and results. My guess is that, because they are saved in session, the ajax request somehow gets its own session, saves the criteria to that, and then retrieves the criteria from that session every time. The non-async stuff is able to see when the criteria are modified and saves the changes to state accordingly, but because they are from two different sessions there is a disparity in what is saved and read. EDIT::: To elaborate more, there was a suggestion of appending the search criteria to the query string, which normally is good practice and I agree that's how it should be, but given our requirements I don't see it as being viable. They want it so the user fills out the input controls, hits search, and there is no page reload; the only thing they see is a progress indicator on the page, and they still have the ability to navigate and use other features on the current page. If I were to add criteria to the query string I would have to do another request causing the whole page to load, which depending on the search criteria can take a really long time. This is why we are using an ajax call to perform the search and why we aren't causing another full page request..... I hope this clarifies the situation.

    Read the article

  • ASP.Net MVC 3 Full Name In DropDownList

    - by tgriffiths
    I am getting a bit confused with this and need a little help please. I am developing an ASP.Net MVC 3 Web application using Entity Framework 4.1. I have a DropDownList on one of my Razor Views, and I wish to display a list of Full Names, for example: Tom Jones, Michael Jackson, James Brown. In my Controller I retrieve a List of User Objects, then select the FirstName and LastName of each User, and pass the data to a SelectList. List<User> Requesters = _userService.GetAllUsersByTypeIDOrgID(46, user.organisationID.Value).ToList(); var RequesterNames = from r in Requesters let person = new { UserID = r.userID, FullName = new { r.firstName, r.lastName } } orderby person.FullName ascending select person; viewModel.RequestersList = new SelectList(RequesterNames, "UserID", "FullName"); return View(viewModel); In my Razor View I have the following: @Html.DropDownListFor(model => model.requesterID, Model.RequestersList, "Select", new { @class = "inpt_a"}) @Html.ValidationMessageFor(model => model.requesterID) However, when I run the code I get the following error: "At least one object must implement IComparable". I feel as if I am going about this the wrong way, so could someone please help with this? Thanks.
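
    One reading of the error (a guess from the snippet): the anonymous FullName object gives LINQ nothing comparable to order by, and SelectList needs a plain property for its text field. A sketch that projects the two names into a single string (variable names follow the question):

        var requesterNames = from r in Requesters
                             orderby r.firstName, r.lastName
                             select new
                             {
                                 UserID = r.userID,
                                 FullName = r.firstName + " " + r.lastName
                             };

        viewModel.RequestersList = new SelectList(requesterNames, "UserID", "FullName");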

    Read the article

  • Use jquery to create a multidimensional array

    - by Simon M White
    I'd like to use jquery and a multidimensional array to show a random quote plus the name of the individual who wrote it as a separate item. I'll then be able to use css to style them differently. The quote will change upon page refresh. So far I have this code, which combines the quote and the name of the person who wrote it: $(document).ready(function(){ var myQuotes = new Array(); myQuotes[0] = "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec in tortor mauris. Peter Jones, Dragons Den"; myQuotes[1] = "Curabitur interdum, nibh et fringilla facilisis, lacus ipsum pulvinar mauris, eu facilisis justo arcu eget diam. Duis id sagittis elit. Theo Pathetis, Dragons Den"; myQuotes[2] = "Vivamus purus purus, tincidunt et porttitor et, euismod sit amet urna. Etiam sollicitudin eros nec metus pretium scelerisque. James Caan, Dragons Den"; var myRandom = Math.floor(Math.random()*myQuotes.length); $('.quote-holder blockquote span').html(myQuotes[myRandom]); }); Any help would be greatly appreciated.
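
    A sketch of one approach (the .text and .author class names are assumptions, so add matching elements inside the blockquote): keeping each quote as a small object puts the quote and the attribution into separate elements that CSS can style independently.

        $(document).ready(function () {
            var myQuotes = [
                { text: "Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
                  author: "Peter Jones, Dragons Den" },
                { text: "Curabitur interdum, nibh et fringilla facilisis, lacus ipsum pulvinar mauris.",
                  author: "Theo Pathetis, Dragons Den" },
                { text: "Vivamus purus purus, tincidunt et porttitor et, euismod sit amet urna.",
                  author: "James Caan, Dragons Den" }
            ];

            // Pick one at random on each page load and fill the two elements.
            var pick = myQuotes[Math.floor(Math.random() * myQuotes.length)];
            $('.quote-holder blockquote .text').html(pick.text);
            $('.quote-holder blockquote .author').html(pick.author);
        });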

    Read the article

  • Create a HTML table from nested maps (and vectors)

    - by Kenny164
    I'm trying to create a table (a work schedule) I have coded previously using Python; I think it would be a nice introduction to the Clojure language for me. I have very little experience in Clojure (or lisp for that matter) and I've done my rounds in Google and a good bit of trial and error but can't seem to get my head around this style of coding. Here is my sample data (will be coming from an sqlite database in the future): (def smpl2 (ref {"Salaried" [{"John Doe" ["12:00-20:00" nil nil nil "11:00-19:00"]} {"Mary Jane" [nil "12:00-20:00" nil nil nil "11:00-19:00"]}] "Shift Manager" [{"Peter Simpson" ["12:00-20:00" nil nil nil "11:00-19:00"]} {"Joe Jones" [nil "12:00-20:00" nil nil nil "11:00-19:00"]}] "Other" [{"Super Man" ["07:00-16:00" "07:00-16:00" "07:00-16:00" "07:00-16:00" "07:00-16:00"]}]})) I was trying to step through this originally using for, then moving on to doseq, and finally domap (which seems more successful), dumping the contents into an HTML table (my original Python program output this from a sqlite database into an Excel spreadsheet using COM). Here is my attempt (the create-table fn): (defn html-doc [title & body] (html (doctype "xhtml/transitional") [:html [:head [:title title]] [:body body]])) (defn create-table [] [:h1 "Schedule"] [:hr] [:table (:style "border: 0; width: 90%") [:th "Name"][:th "Mon"][:th "Tue"][:th "Wed"] [:th "Thur"][:th "Fri"][:th "Sat"][:th "Sun"] [:tr (domap [ct @smpl2] [:tr [:td (key ct)] (domap [cl (val ct)] (domap [c cl] [:tr [:td (key c)]]))]) ]]) (defroutes tstr (GET "/" ((html-doc "Sample" create-table))) (ANY "*" 404)) That outputs the table with the sections (salaried, manager, etc) and the names in the sections; I just feel like I'm abusing the domap by nesting it too many times, as I'll probably need to add more domaps just to get the shift times in their proper columns and the code is getting a 'dirty' feel to it. I apologize in advance if I'm not including enough information, I don't normally ask for help on coding, also this is my 1st SO question :).
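
    A sketch of the nested-seq route (hiccup-style vectors as in the snippet, untested against Compojure's html function): for can replace the stacked domap calls, carrying the section name down to every row and rendering nils as blank cells. Note the attribute map is written {:style ...} rather than (:style ...).

        (defn schedule-rows [schedule]
          ;; One :tr per person, with the section name carried down from the outer map.
          (for [[section people] schedule
                person people
                [emp-name days] person]
            [:tr
             [:td section]
             [:td emp-name]
             (for [day days]
               [:td (or day "")])]))

        (defn create-table []
          [:table {:style "border: 0; width: 90%"}
           [:tr [:th "Section"] [:th "Name"]
                [:th "Mon"] [:th "Tue"] [:th "Wed"] [:th "Thur"]
                [:th "Fri"] [:th "Sat"] [:th "Sun"]]
           (schedule-rows @smpl2)])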

    Read the article

  • Why is the 'Named Domain Class' tool missing in the DSL Designer category in the toolbox ?

    - by Gishu
    I have the Domain-Specific development with VS DSL Tools book by Cook, Jones, et al. The book and various tutorials online mention a NamedDomainClass tool that should be present in the DSL Designer toolbox. I have installed VS 2010 beta 2 on Win XP; however, this tool is missing from the toolbox. I've created a project using the Minimal project template as mentioned in the book. I have 12 tools showing up, including the Domain Class tool. I've searched online and apparently no one else has this problem. Can someone confirm that it's missing in VS 2010 Beta 2? If not, how can I get it to show up? Is there any way in which I can add a Domain class instance and tweak it so that it becomes a Named Domain Class? The book mentions that there are some must-be-unique validation and serialization changes that are done by the NamedDomainClass tool. I've tried the 'Choose Items' context menu on the DSL Designer category. These tools apparently are added dynamically; they do not show up in the lists on the dialog that comes up.

    Read the article
