Search Results

Search found 31367 results on 1255 pages for 'table valued parameters'.

Page 357/1255 | < Previous Page | 353 354 355 356 357 358 359 360 361 362 363 364  | Next Page >

  • Failed to convert parameter value from a Guid to a String

    - by user320460
    Hello, I am at the end of my knowledge and have googled for an answer with no luck. A week ago everything worked well. I did a revert on the repository, recreated the TableAdapter, etc., but nothing helped. When I try to save in my application I get a System.InvalidCastException at this point: PersonListDataSet.cs: partial class P_GroupTableAdapter { public int Update(PersonListDataSet.P_GroupDataTable dataTable, string userId) { this.Adapter.InsertCommand.Parameters["@userId"].Value = userId; this.Adapter.DeleteCommand.Parameters["@userId"].Value = userId; this.Adapter.UpdateCommand.Parameters["@userId"].Value = userId; return this.Update(dataTable); **<-- Exception occurs here** } } Everything is stuck here because a Guid - and I checked the DataTable preview with the magnifier tool, it really is a Guid in that column - cannot be converted to a string. How can that happen?
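
    A common first step with this kind of error is to confirm which generated command parameter is still typed as a string while the DataTable column holds a Guid. The sketch below is a hypothetical diagnostic, not part of the original post (the method and adapter names are assumptions); it dumps the declared type and source column of every parameter so the mismatch can be spotted and the TableAdapter regenerated or the parameter type corrected.

```csharp
using System;
using System.Data.SqlClient;

static class ParameterDump
{
    public static void DumpParameterTypes(SqlDataAdapter adapter)
    {
        // Compare each parameter's declared SqlDbType with the DataTable column it maps to.
        foreach (var command in new[] { adapter.InsertCommand, adapter.UpdateCommand, adapter.DeleteCommand })
        {
            if (command == null) continue;
            Console.WriteLine(command.CommandText);
            foreach (SqlParameter p in command.Parameters)
            {
                Console.WriteLine($"  {p.ParameterName}: {p.SqlDbType} (source column: {p.SourceColumn})");
            }
        }
    }
}
```

    If a parameter mapped to the Guid column reports a string type such as NVarChar, regenerating the TableAdapter (or changing that parameter to SqlDbType.UniqueIdentifier) is a likely fix.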

    Read the article

  • How can I implement the same behavior as Dictionary.TryGetValue

    - by pblasucci
    So, given the following code: type MyClass () = let items = Dictionary<string,int>() do items.Add ("one",1) items.Add ("two",2) items.Add ("three",3) member this.TryGetValue (key,value) = items.TryGetValue (key,value) let c = MyClass () let d = Dictionary<string,int> () d.Add ("one",1) d.Add ("two",2) d.Add ("three",3) And the following test code: let r1,v1 = d.TryGetValue "one" let r2,v2 = c.TryGetValue "one" The r1,v1 line works fine. The r2,v2 line bombs, complaining that c.TryGetValue must be given a tuple. Interestingly, the signature of TryGetValue is different in each line. How can I get my custom implementation to exhibit the same behavior as the BCL version? Or, asked another way: since F# has (implicitly) the concepts of tuple parameters, curried parameters, and BCL parameters, and I know how to distinguish between curried and tuple-style, how can I force the third style (a la BCL methods)? Let me know if this is unclear.

    Read the article

  • GEvent.addListener(...) return?

    - by user354436
    Hello, my question is as follows: what does GEvent.addListener(map, "click", function(){...}) pass into the callback function? I can't find any information in the GMaps reference at all; can you show me some? The only thing I found out was that two parameters, "overlay" and "latLng", are passed. The names of these parameters should not matter, right? I could also name them "foo" and "bar" as far as I know. But the parameter "overlay" seems to be empty anyway? I also have problems passing these two parameters directly into a callback function I created myself, which looks like this... GEvent.addListener(gmap, "click", generateMarker(overlay, latLng)); ... instead of writing the following, which actually works fine. GEvent.addListener(gmap, "click", function(overlay, latLng) { generateMarker(overlay, latLng); });

    Read the article

  • AIR SQLite IN expression not working

    - by goseta
    Hi, I'm having a problem with an expression in my SQL statement in SQLite for Adobe AIR. Basically I have this: sql = "UPDATE uniforms SET status=@status WHERE customerId IN(19,20)"; updateStmt.parameters["@status"] = args[1]; updateStmt.execute(); If I run the above code it works, updating the status where the id is 19 or 20. But if I pass the id list as a parameter, like this: sql = "UPDATE uniforms SET status=@status WHERE customerId IN(@ids)"; updateStmt.parameters["@status"] = args[1]; updateStmt.parameters["@ids"] = "19,20"; updateStmt.execute(); it gives me an error saying it could not convert the text value to a numeric value, which makes sense because I'm passing a string, but I expected the IN expression to handle it the way it does when I pass the list values directly. Why doesn't it work the other way? Thanks for any help!
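
    The underlying issue, error message aside, is that a single parameter placeholder binds exactly one value, not a list, so "19,20" arrives as one string rather than two numbers. The usual workaround is to generate one placeholder per id. The sketch below shows that pattern in ADO.NET/C# terms rather than ActionScript, purely as an illustration; the table name follows the post, but the helper itself is an assumption.

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

static class InListHelper
{
    public static SqlCommand BuildUpdateWithInList(SqlConnection con, string status, IReadOnlyList<int> ids)
    {
        // Generate "IN (@id0, @id1, ...)" with one placeholder per value.
        string placeholders = string.Join(", ", ids.Select((_, i) => "@id" + i));
        var cmd = new SqlCommand(
            "UPDATE uniforms SET status = @status WHERE customerId IN (" + placeholders + ")", con);

        cmd.Parameters.AddWithValue("@status", status);
        for (int i = 0; i < ids.Count; i++)
            cmd.Parameters.AddWithValue("@id" + i, ids[i]);

        return cmd;
    }
}
```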

    Read the article

  • Routing parameter returns null when only supplying the first parameter in MVC

    - by Ray ForRespect
    My issue is that I have a custom MapRoute in MVC that takes three parameters. When I supply all three, or just two, the parameters are passed from the URL to my controller. However, when I only supply the first parameter, it is not passed and comes through as null. I am not sure what causes this behavior. Route: routes.MapRoute( name: "Details", // Route name url: "{controller}/{action}/{param1}/{param2}/{param3}", // URL with parameters defaults: new { controller = "Details", action = "Index", param1 = UrlParameter.Optional, param2 = UrlParameter.Optional, param3 = UrlParameter.Optional } // Parameter defaults ); Controller: public ActionResult Map(string param1, string param2, string param3) { StoreMap makeMap = new StoreMap(); var storemap = makeMap.makeStoreMap(param1, param2, param3); var model = storemap; return View(model); } string param1 is null when I navigate to: /StoreMap/Map/PARAM1NAME but it is not null when I navigate to: /StoreMap/Map/PARAM1NAME/PARAM2NAME
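
    One frequent cause of this symptom, offered here as an assumption rather than something confirmed by the post, is route ordering: if the generic {controller}/{action}/{id} route is registered before the custom one, a single-segment URL like /StoreMap/Map/PARAM1NAME matches the default route and the value is bound to id, leaving param1 null, while two- and three-segment URLs can only match the custom route. A sketch of registering the more specific route first:

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // More specific route first, so /StoreMap/Map/PARAM1NAME binds to param1, not id.
        routes.MapRoute(
            name: "Details",
            url: "{controller}/{action}/{param1}/{param2}/{param3}",
            defaults: new { controller = "Details", action = "Index",
                            param1 = UrlParameter.Optional,
                            param2 = UrlParameter.Optional,
                            param3 = UrlParameter.Optional });

        // Generic catch-all route last.
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }
}
```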

    Read the article

  • subplot matplotlib wrong syntax

    - by madptr
    I am using matplotlib to subplot in a loop. For instance, I would like to subplot 49 data sets, and from the docs I implemented it this way: import numpy as np import matplotlib.pyplot as plt X1=list(range(0,10000,1)) X1 = [ x/float(10) for x in X1 ] nb_mix = 2 parameters = [] for i in range(49): param = [] Y = [0] * len(X1) for j in range(nb_mix): mean = 5* (1 + (np.random.rand() * 2 - 1 ) * 0.5 ) var = 10* (1 + np.random.rand() * 2 - 1 ) scale = 5* ( 1 + (np.random.rand() * 2 - 1) * 0.5 ) Y = [ Y[k] + scale * np.exp(-((X1[k] - mean)/float(var))**2) for k in range(len(X1)) ] param = param + [[mean, var, scale]] ax = plt.subplot(7, 7, i + 1) ax.plot(X1, Y) parameters = parameters + [param] ax.show() However, I get an index out of range error from i=0 onwards. What can I do better to make it work? Thanks

    Read the article

  • Weird exception with OLEDB Parameter Insert

    - by Seamus MacKenzie
    I am getting a strange error when trying to insert data into an Access database using parameters. The line where I am getting a problem is: thisCommand.CommandText = "INSERT INTO Events (Venue_ID, Date_Start, Date_End, Time_Start, Time_End, Name, Description, Event_Type, Buy_Tickets_URL) VALUES (@VenID, @DStart, @DEnd, @evTime, @evTime, @Name, @Des, @EvType, @SysUrl);"; string desc = GetDesc(rec.EvName); thisCommand.Parameters.AddWithValue("@Des", desc); thisCommand.ExecuteNonQuery(); None of the other parameters cause a problem, but when trying to insert data into the description field I get a database exception saying the field is too small to accept the amount of data. The problem is my program is only trying to insert 3 characters when it throws the error, and the Description field is a memo, so it should be able to hold 65,000+ characters. When inserting a value manually in the CommandText everything works fine, so it must be something to do with the parameter properties.
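
    One likely explanation, stated here as an assumption rather than a confirmed diagnosis, is that the Jet/ACE OLE DB provider ignores parameter names and binds values strictly by position, so adding only @Des can end up binding the description text to an earlier, smaller column in the VALUES list. A sketch of adding one parameter per placeholder, in query order, with an explicit type for the memo column (the method signature and variable names are illustrative):

```csharp
using System;
using System.Data.OleDb;

static class EventInsert
{
    public static void InsertEvent(OleDbConnection conn, int venueId, DateTime dateStart, DateTime dateEnd,
                                   DateTime eventTime, string name, string desc, string eventType, string buyUrl)
    {
        var cmd = new OleDbCommand(
            "INSERT INTO Events (Venue_ID, Date_Start, Date_End, Time_Start, Time_End, Name, " +
            "Description, Event_Type, Buy_Tickets_URL) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)", conn);

        // The Jet/ACE provider binds parameters by position, so add one value per placeholder, in order.
        cmd.Parameters.AddWithValue("@VenID", venueId);
        cmd.Parameters.AddWithValue("@DStart", dateStart);
        cmd.Parameters.AddWithValue("@DEnd", dateEnd);
        cmd.Parameters.AddWithValue("@TStart", eventTime);
        cmd.Parameters.AddWithValue("@TEnd", eventTime);
        cmd.Parameters.AddWithValue("@Name", name);
        cmd.Parameters.Add("@Des", OleDbType.LongVarWChar).Value = desc;   // explicit type for the memo column
        cmd.Parameters.AddWithValue("@EvType", eventType);
        cmd.Parameters.AddWithValue("@SysUrl", buyUrl);

        cmd.ExecuteNonQuery();
    }
}
```

    If positional binding turns out not to be the cause, explicitly typing the description parameter as OleDbType.LongVarWChar, rather than letting AddWithValue infer a short text type, is the other usual suspect.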

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.

    Read the article

  • Reusing an anonymous parameter in a prepared statement

    - by Chris Lieb
    I am customizing the insert SQL generated by hibernate and have hit an issue. When Hibernate generates the query by itself, it inserts data into the first two columns of the table, but this causes a database error since all four columns of the table are non-nullable. For the insert to be performed properly, it must insert the same data into two columns of the new record. This means that I need Hibernate to bind the same data to two different parameters in the query (prepared statement) that I am writing. Is there some SQL syntax that allows me to refer to anonymous parameters bound to a prepared statement in an order different from which they are bound? Details REF_USER_PAGE_XREF ---------------------------------------- PK FK1 | NETWORK_ID | VARCHAR2(100) PK FK1 | PAGE_PATH | VARCHAR2(1000) | USER_LAST_UPDT | VARCHAR2(100) | TMSP_LAST_UPDT | DATE insert into REF_USER_ROLE_XREF( NETWORK_ID, PAGE_PATH, TMSP_LAST_UPDT, USER_LAST_UPDT) values ( ?, /* want to insert the same data here */ ?, ?, /* and here */ (select to_char(sysdate, 'DD-MON-YY') from dual) I want to insert the same data into the first and third anonymous parameters.

    Read the article

  • Crystal Reports using ASP.Net

    - by Dattu
    I am new to ASP.NET and Crystal Reports. I created a report using Crystal Reports with three parameters: Division, Month, and Year. The parameter values come from the database via dropdown lists. After the parameters are selected and the report button is clicked, the report should appear. The report does come up, but the values I selected are not displayed in it. Please give me a solution for this; sample code would also be appreciated. Thanks in advance. Dattu
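
    A common cause of parameters showing up blank in the rendered report is that the selected values are never pushed into the ReportDocument before it is bound to the viewer. The sketch below shows the usual ASP.NET code-behind pattern; the report path, control names, and parameter names are assumptions, since they are not given in the post.

```csharp
using System;
using CrystalDecisions.CrystalReports.Engine;

// Hypothetical button-click handler in the page's code-behind.
protected void btnReport_Click(object sender, EventArgs e)
{
    var report = new ReportDocument();
    report.Load(Server.MapPath("~/Reports/DivisionReport.rpt"));

    // Push the dropdown selections into the report's parameters before binding it to the viewer.
    report.SetParameterValue("Division", ddlDivision.SelectedValue);
    report.SetParameterValue("Month", ddlMonth.SelectedValue);
    report.SetParameterValue("Year", ddlYear.SelectedValue);

    CrystalReportViewer1.ReportSource = report;
}
```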

    Read the article

  • AS3 Components in Flash Designer

    - by Jack Voight
    In an ActionScript 2 project I can create a new MovieClip, right-click on it in the library and select "Component Definition" to add parameters that can be referenced inside the MovieClip. These parameters can easily be changed in the MovieClip's properties. Now I'm working on an ActionScript 3 project but haven't been able to figure out a way to obtain the values passed in those parameters. I defined a parameter named "textToDisplay" but when I write the following in the Actions for the first frame I get an error: trace(textToDisplay); This is the error: 1120: Access of undefined property textToDisplay. Do you know how to capture the value of that parameter? Thanks PS: I'm using Adobe Flash CS3 Professional on Windows XP

    Read the article

  • How can unit testing make parameter validation redundant?

    - by Johann Gerell
    We have a convention to validate all parameters of constructors and public functions/methods. For mandatory parameters of reference type, we mainly check for non-null, and that's the chief validation in constructors, where we set up the mandatory dependencies of the type. The number one reason we do this is to catch the error early and not get a null reference exception a few hours down the line without knowing where or when the faulty parameter was introduced. As we transition to more and more TDD, some team members feel the validation is redundant. Uncle Bob, who is a vocal advocate of TDD, strongly advises against doing parameter validation. His main argument seems to be "I have a suite of unit tests that makes sure everything works". But I cannot for the life of me see in what way unit tests can prevent our developers from calling these methods with bad parameters in production code. Please, unit testers out there, if you could explain this to me in a rational way with concrete examples, I'd be more than happy to cease this parameter validation!
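
    For readers unfamiliar with the convention being debated, it is the classic guard clause shown below; the argument for keeping it is that it fails fast at the call site in production, where no unit test is running. The class and interface names are purely illustrative.

```csharp
using System;

public interface IPaymentGateway { }

public class OrderProcessor
{
    private readonly IPaymentGateway _gateway;

    public OrderProcessor(IPaymentGateway gateway)
    {
        // Guard clause: fail immediately with a clear message instead of a
        // NullReferenceException hours later in some unrelated method.
        if (gateway == null)
            throw new ArgumentNullException("gateway");

        _gateway = gateway;
    }
}
```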

    Read the article

  • Setting location based on previous parameter of $routeChangeError with AngularJS

    - by Moo
    I'm listening on events of the type $routeChangeError in a run block of my application. $rootScope.$on("$routeChangeError", function (event, current, previous, rejection) { if (!!previous) { console.log(previous); $location.path(previous.$$route.originalPath); } }); With the help of the previous parameter I would like to set the location to the previous page. This works as long as the "originalPath" of "previous.$$route" does not contain any parameters. If it contains parameters the "originalPath" is not transformed. Logging the previous objects returns the following output: $$route: Object ... originalPath: "/users/:id" regexp: /^\/users\/(?:([^\/]+)?)$/ ... params: Object id: "3" How can I set the location to the previous path including the parameters?

    Read the article

  • How SqlDataAdapter works internally?

    - by tigrou
    I wonder how SqlDataAdapter works internally, especially when using UpdateCommand to update a huge DataTable (since it's usually a lot faster than just sending SQL statements from a loop). Here are some ideas I have in mind:
    1. It creates a prepared SQL statement (using SqlCommand.Prepare()) with CommandText filled in and SQL parameters initialized with the correct SQL types. Then it loops over the DataRows that need updating, and for each record it updates the parameter values and calls SqlCommand.ExecuteNonQuery().
    2. It creates a bunch of SqlCommand objects with everything filled in (CommandText and SQL parameters). Several SqlCommands at once are then batched to the server (depending on UpdateBatchSize).
    3. It uses some special, low-level or undocumented SQL driver instructions that allow an update on several rows to be performed efficiently (the rows to update would need to be provided in a special data format, and the same SQL query (the UpdateCommand here) would be executed against each of these rows).
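
    Whatever the internal mechanism turns out to be, the batching behavior behind the second idea is something you can switch on and observe directly: with UpdateBatchSize set, the parameterized UpdateCommand executions are grouped into fewer round trips. A minimal sketch follows, in which the table, columns, and connection are assumptions; profiling the server while toggling UpdateBatchSize is a simple way to see how many round trips each setting costs.

```csharp
using System.Data;
using System.Data.SqlClient;

static class BatchedUpdate
{
    public static void UpdateInBatches(SqlConnection conn, DataTable table)
    {
        var update = new SqlCommand("UPDATE Customers SET Name = @Name WHERE Id = @Id", conn);
        update.Parameters.Add("@Name", SqlDbType.NVarChar, 100, "Name"); // bound to the DataTable's Name column
        update.Parameters.Add("@Id", SqlDbType.Int, 4, "Id");            // bound to the DataTable's Id column
        update.UpdatedRowSource = UpdateRowSource.None;                   // required when batching updates

        var adapter = new SqlDataAdapter { UpdateCommand = update };
        adapter.UpdateBatchSize = 100; // group up to 100 statements per round trip (0 = as many as possible)
        adapter.Update(table);         // pushes every modified row through the parameterized UpdateCommand
    }
}
```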

    Read the article

  • Can we assign an object value to a label?

    - by user334294
    I have a label to which I have to assign an object value returned by a stored procedure. My code is as follows: object returnvalue; SqlConnection con = new SqlConnection("Data Source=vela21; Initial Catalog=MilkDb;Integrated Security=True"); con.Open(); string sa; sa = textBox1.Text; SqlCommand cmd = new SqlCommand("custname", con); cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.Add("Cid", SqlDbType.Int).Value = sa; cmd.Parameters.Add("cname", SqlDbType.NVarChar, 20); cmd.Parameters["cname"].Direction = ParameterDirection.Output; returnvalue = cmd.ExecuteNonQuery(); label3.Text = Convert.ToString(returnvalue); con.Close(); Can anyone help me? Please.
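
    As posted, the label is assigned the return value of ExecuteNonQuery, which is the number of rows affected, not the stored procedure's output parameter. A sketch of the usual pattern, reading the output parameter after execution (the procedure and parameter names follow the post; converting the textbox value to int is an assumption):

```csharp
// Inside the same event handler as the original snippet (WinForms context assumed).
using (SqlConnection con = new SqlConnection("Data Source=vela21; Initial Catalog=MilkDb;Integrated Security=True"))
using (SqlCommand cmd = new SqlCommand("custname", con))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("Cid", SqlDbType.Int).Value = int.Parse(textBox1.Text);
    cmd.Parameters.Add("cname", SqlDbType.NVarChar, 20).Direction = ParameterDirection.Output;

    con.Open();
    cmd.ExecuteNonQuery();                                           // returns the number of rows affected
    label3.Text = Convert.ToString(cmd.Parameters["cname"].Value);   // read the OUTPUT parameter instead
}
```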

    Read the article

  • Design report of 4-D data set

    - by phq
    I'm writing a report generator that will present data where each value is generated from 4 parameters: time interval, group, measurement value (one of several to choose from), and device. All of these are orthogonal, giving me a 4-D dataset to present. There are some simplifications where one parameter is the same for all and other parameters are merged. Still, it appears there are situations where all values are wanted on the report. In short, the report should be easy to get an overview of while still containing the details. There will also be an interface where the user sets up the range and granularity for each parameter. The most naive solution would be a 2D table where each cell contains another table with the values of the remaining two dimensions. This is technically feasible, but I'm worried it would become hard to overview. Another approach is to present the first two dimensions in a 2D table and the remaining parameters in groups. Is there any good method to address this kind of issue?

    Read the article

  • Android camera being landscaped in some devices

    - by nala4ever
    I'm new to Android and I tried a tutorial for the camera API. The tutorial works fine. When I use an HTC Desire I can see the camera view in both portrait and landscape, but when I use a Samsung Galaxy I get the camera view only in landscape. I tried the following code to rotate the camera view as well: Camera.Parameters parameters = camera.getParameters(); parameters.setRotation(90); but then the camera doesn't work as expected (the screen splits into 4 and is not clear). Does anyone have an idea about this issue? Thanks.

    Read the article

  • Do I need the text "_size" in the my.cnf file for mysql 5.1?

    - by chongman
    This is a pretty simple question about setting parameters in the my.cnf file for mysql 5.1. This page gives me the parameters I can tune: http://dev.mysql.com/doc/refman/5.0/en/server-parameters.html and so I think I would need to write key_buffer_size = 256M But when I open my current my.cnf, it has the line: key_buffer = 16M My question is, do I need "key_buffer_size" or "key_buffer" or does it not matter which I use? And, how would I know if something in the my.cnf is incorrect? Where's the daemon start log file? I am running ubuntu; I think version 8.04 LTS

    Read the article

  • How can I see a variable defined in another PHP file?

    - by Roman
    I use the same constant in all my PHP files. I do not want to assign the value of this variable in every file. So I wanted to create one "parameters.php" file and do the assignment there. Then, in all the other files, I include "parameters.php" and use the variables defined in it. That was the idea, but it does not work. I also tried to make the variable global; that does not work either. Is there a way to do what I want? Or maybe there is some alternative approach?

    Read the article

  • programming help

    - by user208639
    The assignment: class Person holds personal data. Its constructor receives 3 parameters, two Strings representing first and last names and an int representing age: public Person(String firstName, String lastName, int age). Its method getName has no parameters and returns a String with the format "Lastname, Firstname"; its method getAge takes no parameters and returns an int representing the current age; its method birthday increases the age value by 1 and returns the new age value. Create the class Person and paste the whole class into the textbox below. Here is what I have so far: public class Person { public Person(String first, String last, int age) { getName = "Lastname, Firstname"; System.out.print(last + first); getAge = age + 1; return getAge; System.out.print(getAge); birthday = age + 1; newAge = birthday; return newAge; } } I'm getting errors such as "cannot find symbol - variable getName", but when I declare a variable it still doesn't work. I also wanted to ask if I am heading in the right direction or if it is all totally wrong? I'm using a program called BlueJ to work in.

    Read the article

  • Google Rules for Retail

    - by David Dorf
    In the book What Would Google Do?, Jeff Jarvis outlines ten "Google Rules" that define how Google acts.  These rules help define how Web 2.0 businesses operate today and into the future.  While there's a chapter in the book on applying these rules to the retail industry, it wasn't very in-depth.  So I've decided to more directly apply the rules to retail, along with some notable examples of success.  The list below pairs each of Jeff's Google Rules with some industry examples and the New Retailer Rule that I created.

    New Relationship (Your worst customer is your friend; your best customer is your partner) - Newegg.com lets manufacturers respond to customer comments that are critical of the product, and their EggXpert site lets customers help other customers. New Retailer Rule: Listen to what your customers are saying about you. Convert the critics to fans and the fans to influencers.

    New Architecture (Join a network; be a platform) - Tesco and BestBuy released APIs for their product catalogs so third parties could create new applications. New Retailer Rule: Become a destination for information.

    New Publicness (Life is public, so is business) - Zappos and WholeFoods founders are prolific tweeters/bloggers, sharing their opinions and connecting to customers. It's not always pretty, but it's genuine. New Retailer Rule: Be transparent. Share both your successes and failures with your customers.

    New Society (Elegant organization) - Wet Seal helps their customers assemble outfits and show them off to each other. Barnes & Noble has a community site that includes a book club. New Retailer Rule: Communities of your customers already exist, so help them organize better.

    New Economy (Mass market is dead; long live the mass of niches) - lululemon found a niche for yoga-inspired athletic wear. Threadless uses crowd-sourcing to design short runs of T-shirts. New Retailer Rule: Serve small markets with niche products.

    New Business Reality (Decide what business you're in) - When Lowes realized catering to women brought the men along, their sales increased. New Retailer Rule: Customers want experiences to go with the products they buy.

    New Attitude (Trust the people and listen) - In 2008 Starbucks launched MyStarbucksIdea to solicit ideas from their customers. New Retailer Rule: Use social networks as additional data points for making better merchandising decisions.

    New Ethic (Be honest and transparent; don't be evil) - Target is giving away reusable shopping bags for Earth Day. Kohl's has outfitted 67 stores with solar arrays. New Retailer Rule: Being green earns customers' respect and lowers costs too.

    New Speed (Life is live) - H&M and Zara keep up with fashion trends. New Retailer Rule: Be prepared to pounce on your customers' fickle interests.

    New Imperatives (Encourage, enable and protect innovation) - 1-800-Flowers was the first to do sales on Facebook and an early adopter of mobile commerce. The Sears Personal Shopper mobile app finds products based on a photo. New Retailer Rule: Give your staff permission to fail so innovation won't be stifled.

    Jeff will be a keynote speaker at Crosstalk, our upcoming annual user conference, so I'm looking forward to hearing more of his perspective on retail and the new economy.

    Read the article

  • SQL SERVER – Simple Example of Snapshot Isolation – Reduce the Blocking Transactions

    - by pinaldave
    To learn any technology and move to a more advanced level, it is very important to understand the fundamentals of the subject first. Today, we will be talking about something that was introduced quite a long time ago but is still not properly explored when it comes to isolation levels. Snapshot Isolation was introduced in SQL Server 2005. However, the reality is that there are still many software shops using SQL Server 2000, which therefore cannot use Snapshot Isolation at all. Many software shops have upgraded to a later version of SQL Server, but their developers have not spent enough time to bring themselves up to date with the latest technology. “It works!” is a very common answer from many when they are asked about utilizing the new technology instead of backward-compatibility commands. In one recent consulting project, I had the same experience: the developers had “heard about it” but had no idea about snapshot isolation. They were thinking it is the same as Snapshot Replication – which is plain wrong. I am including here the same demo I created for them. In Snapshot Isolation, the updated row versions for each transaction are maintained in TempDB. Once a transaction has begun, it ignores all the newer rows inserted or updated in the table. Let us examine this simple demonstration. This transaction works on an optimistic concurrency model. Since a reading transaction does not block a writing transaction, and a writing transaction does not block a reading transaction, blocking is reduced. First, enable the database to work with Snapshot Isolation. Additionally, check the existing values in the table HumanResources.Shift. ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON GO SELECT ModifiedDate FROM HumanResources.Shift GO Now, we will need two different sessions to prove this example. First session: set the transaction isolation level to snapshot and begin the transaction. Update the column “ModifiedDate” to today’s date. -- Session 1 SET TRANSACTION ISOLATION LEVEL SNAPSHOT BEGIN TRAN UPDATE HumanResources.Shift SET ModifiedDate = GETDATE() GO Please note that we have not yet committed the transaction. Now, open the second session and run the following “SELECT” statement. Then, check the values of the table. Please pay attention to setting the isolation level for the second session to “Snapshot” as well when we start the transaction using BEGIN TRAN. -- Session 2 SET TRANSACTION ISOLATION LEVEL SNAPSHOT BEGIN TRAN SELECT ModifiedDate FROM HumanResources.Shift GO You will notice that the values in the table are still the original values. They have not been modified yet. Once again, go back to session 1 and commit the transaction. -- Session 1 COMMIT After that, go back to session 2 and see the values of the table. -- Session 2 SELECT ModifiedDate FROM HumanResources.Shift GO You will notice that the values have not yet changed; they are still the same old values that were there right at the beginning of the session. Now, let us commit the transaction in session 2. Once committed, run the same SELECT statement once more and see what the result is. -- Session 2 COMMIT SELECT ModifiedDate FROM HumanResources.Shift GO You will notice that it now reflects the newly updated value. I hope that this example is clear enough to give you a good idea of how the Snapshot Isolation level works. 
There is much more to write about an extra level, READ_COMMITTED_SNAPSHOT, which we will be discussing in another post soon. If you wish to use this transaction’s Isolation level in your production database, I would appreciate your comments about their performance on your servers. I have included here the complete script used in this example for your quick reference. ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON GO SELECT ModifiedDate FROM HumanResources.Shift GO -- Session 1 SET TRANSACTION ISOLATION LEVEL SNAPSHOT BEGIN TRAN UPDATE HumanResources.Shift SET ModifiedDate = GETDATE() GO -- Session 2 SET TRANSACTION ISOLATION LEVEL SNAPSHOT BEGIN TRAN SELECT ModifiedDate FROM HumanResources.Shift GO -- Session 1 COMMIT -- Session 2 SELECT ModifiedDate FROM HumanResources.Shift GO -- Session 2 COMMIT SELECT ModifiedDate FROM HumanResources.Shift GO Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Transaction Isolation

    Read the article

  • SQL Developer Data Modeler v3.3 Early Adopter: Collaborative Design via Excel?

    - by thatjeffsmith
    As you may have heard last week, we have a new version of Oracle SQL Developer Data Modeler now available as an Early Adopter release. Version 3.3 has quite a few new features and I’ll be previewing them here. Today’s topic is our new Excel integration. It builds off of last week’s lesson: Search, so you may want to go read that first. They say it takes a village to raise a child. I say it takes a team to build a data model. You have your techie folks, your business folks, your in-betweeners, and your database geeks. Who gets to define how customers are represented and stored in your database? That data lives forever, so you better get it right from the beginning, or you’ll be living in a hacker’s paradise for years to come. Lots of good rantings, ravings, and advice on this topic in general on Karen Lopez’s (@datachick) blog. But let’s say you are the primary modeler on a project. You dutifully interview the business folks for their requirements. You sit down and start to model and think you’re pretty close. Now you need someone to confirm your assumptions and provide some feedback. Do you send your model over? Take a screenshot and blow it up on a whiteboard? Export to HTML and let them take a magic marker to their monitors? Or maybe you bite the bullet and install your modeling software on their desktops and take the hours or days required to train them up on how to use the the tool. Wouldn’t it be nice if they could just mark up their corrections in Excel and let you suck the updates back in? This is what we have started to build in Oracle SQL Developer Data Modeler. Let’s say you have a new table called ‘UT_STARTUPS.’ It looks a little something like this: A table in Oracle SQL Developer Data Modeler What I would like to do is have my team or co-worker review how I have defined those columns. Perhaps TIMESTAMP is overkill or maybe the column names themselves aren’t up to snuff. What I am going to do is now search for all the columns in my table, then export that to Excel. So do a search for UT_STARTUPS. Search, filter, then Report With the filter set to ‘Columns,’ if I do a report I’ll be only getting the columns that are resolving to my search term. So as long as my table name is unique in the model, I should get what I’m looking for. Here’s what I see when I click on the Report button: XLS or XLSX, either format is just fine I want to decide how the Column data is exported to Excel though, so I’m going to create a report template that I can use going forward. So click the ‘Manage’ button and setup a new template. I’m going to call mine ‘CollaborativeDevelopment.’ The templates allow me to define what properties are included in the reports. Once this is set, I’ll have the XLS file generated, and get to work Now let the Excel junkies do their stuff Note that not ALL of the report properties are update-able (yes, I made up a new word there) via Excel. We’ll have the full list of properties documented going forward, but in my Excel sheet, note that I can’t change the table name or the data types for the columns. I’m going to update some column names and supply ‘nice’ comments so the database users know what’s what. Here’s my input for the designer/architect/database dude: Be kind, please rew…use comments. Save the file, email it back to your modeler. Update the model from Excel That’s right, it’s a right mouse click from your model in the tree If everything goes right, you’ll see a nice confirmation message: It’s alive! 
Another to-do item on tap – making this dialog more informative. We’ll be showing exactly what in your model was updated from Excel. Let’s take another look at the model now Voila! Why are we doing this again? The goal is to reduce the number of round-trips from the modeler and the business process owner. One is used to working with Excel – why not allow them to mark up their changes in the tool they already know? This is an early adopter release and I anticipate this feature getting a good bit of tuning up before we release. Why don’t you download 3.3, give it a whirl, and let us know what you think?

    Read the article

  • SQL SERVER – An Efficiency Tool to Compare and Synchronize SQL Server Databases

    - by Pinal Dave
    There is no need to reinvent the wheel if it has already been invented, and if the wheel is readily available, there is no need to wait to grab it. Here is a similar situation. I came across a very interesting situation and had to look for an efficient tool that could make my life easier and solve my business problem. Here is the scenario. One of the developers had deleted a few rows from a very important mapping table on our development server (thankfully, it was not the production server). Though it was a development server, the entire development team had to stop working as the application started to crash on every page. Think about the loss of manpower and efficiency we were starting to incur. Pretty much every department had to stop working as our internal development application stopped working. Thankfully, we take backups even of our development server, and we had access to a full backup of the entire database from 6 AM that morning. We do not back up the development server as frequently as the production server (naturally!). Even though we had a full backup, the solution was not to restore the database. Think about it: there had been plenty of other operations since the last good full backup, and if we restored it, we would pretty much overwrite the work done by the developers since the morning. Now, as restoring the full backup was not an option, we decided to restore the same database on another server. Once we had restored our database to another server, the challenge was to compare the table from which the data had been deleted. That mapping table contained over 5,000 rows, and it was humanly impossible to compare the two tables manually. Finally, we decided to use the efficient tool dbForge Data Compare for SQL Server from DevArt. dbForge Data Compare for SQL Server is a powerful, fast and easy-to-use SQL compare tool, capable of using native SQL Server backups as a metadata source. (FYI, we downloaded dbForge Data Compare.) Once we discovered the product, we immediately downloaded it and installed it on our development server. After we installed the product, we were greeted with the following screen. We clicked on New Data Comparison to start our new comparison project. It brought up the following screen. Here is the best part of the product: we just had to enter our database connection username and password along with the source and destination details, and we were done. The entire process is very simple and intuitive. The best part was that for the source we could select either a database or a backup. This was indeed a fantastic feature. Think about it: if you have a very big database, it will take a long time to restore on a server before you can work with it. dbForge Data Compare, however, will accept a database backup directly as your source or destination. Once I clicked Execute, it brought up the following screen, which displayed an excellent summary of the data comparison. It has dedicated tabs showing what is changing in which table, as well as details of the changed data. Once we had reviewed the changes, we clicked the Synchronize button in the menu bar, and it brought up the following screen. You can see that the screen has very simple, straightforward, but very powerful features. You can generate a script to synchronize from target to source or even from source to target.
    Additionally, the database world is a very complicated one, and there are extensive options to configure various database settings on the next screen. We also have the option to either generate the script or execute it directly against the target server. I like to play on the safe side, so I generated the script for my synchronization, and after review I deployed it on the server. Well, my team and I were able to recover from our disaster in less than 10 minutes. A few people on our team were indeed disappointed, as they were thinking of going home early that day, but in less than 10 minutes they had to get back to work. There are so many other features in dbForge Data Compare for SQL Server that I am already planning to make it our company-wide recommended data compare tool. Hats off to the team who built this product. Here are a few of the salient features of dbForge Data Compare for SQL Server:
    - Perform SQL Server database comparison to detect changes
    - Compare SQL Server backups with live databases
    - Analyze data differences between two databases
    - Synchronize two databases that went out of sync
    - Restore data of a particular table from the backup
    - Generate data comparison reports in Excel and HTML formats
    - Copy look-up data from development database to production
    - Automate routine data synchronization tasks with the command-line interface
    Go ahead and download dbForge Data Compare for SQL Server right away. It is always a good idea to get familiar with important tools beforehand instead of learning them under the pressure of a disaster. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

< Previous Page | 353 354 355 356 357 358 359 360 361 362 363 364  | Next Page >