Search Results

Search found 65721 results on 2629 pages for 'set options'.


  • MODx character encoding

    - by Piet
    Ahhh character encodings. Don’t you just love them? Having character issues in MODx? Then probably the MODx manager character encoding, the character encoding of the site itself, your database’s character encoding, or the encoding MODx/php uses to talk to MySQL isn’t correct.
    The Website Encoding: Your MODx site’s character encoding can be configured in the manager under Tools/Configuration/Site/Character encoding. This is the encoding your website’s visitors will get.
    The Manager’s Encoding: The manager’s encoding can be changed by setting $modx_manager_charset at manager/includes/lang/<language>.inc.php like this (for example): $modx_manager_charset = 'iso-8859-1'; To find out what language you’re using (and thus what file you need to change), check Tools/Configuration/Site/Language (1 line above the character encoding setting). This needs to be the same encoding as your site. You can’t have your manager in utf8 and your site in iso-8859-1.
    Your Database’s Encoding: The charset MODx/php uses to talk to your database can be set by changing $database_connection_charset in manager/includes/config.inc.php. This needs to be the same as your database’s charset. Make sure you use the correct corresponding charset; for iso-8859-1 you need to use 'latin1'. Utf8 is just utf8. Example: $database_connection_charset = 'latin1';
    Now, if you check Reports/System info, the ‘Database Charset’ might say something else. This is because the mysql variable ‘character_set_database’ is displayed here, which contains the character set used by the default database and not the one for the current database/connection. However, if you’d change this to display ‘character_set_connection’, it could still say something else, because the 'set character set' statement used by MODx doesn’t change this value either. The 'set names' statement does, but since it turns out my MODx install works now as expected I’ll just leave it at this before I get a headache. If I saved you a potential headache or you think I’m totally wrong or overlooked something, let me know in the comments. btw: I want to be able to use a real editor with MODx. Somehow.
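
    For reference, a minimal sketch of the two config edits described above, assuming an iso-8859-1 site (the language file name is an example; use the file reported under Tools/Configuration/Site/Language):

        // manager/includes/lang/english.inc.php (must match the site encoding)
        $modx_manager_charset = 'iso-8859-1';

        // manager/includes/config.inc.php ('latin1' is MySQL's name for iso-8859-1; UTF-8 sites use 'utf8')
        $database_connection_charset = 'latin1';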

    Read the article

  • Deleting elements from stl set while iterating through it does not invalidate the iterators.

    - by pedromanoel
    I need to go through a set and remove elements that meet a predefined criterion. This is the test code I wrote: #include <iostream> #include <set> #include <algorithm> void printElement(int value) { std::cout << value << " "; } int main() { int initNum[] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 }; std::set<int> numbers(initNum, initNum + 10); // print '0 1 2 3 4 5 6 7 8 9' std::for_each(numbers.begin(), numbers.end(), printElement); std::set<int>::iterator it = numbers.begin(); // iterate through the set and erase all even numbers for (; it != numbers.end(); ++it) { int n = *it; if (n % 2 == 0) { // wouldn't invalidate the iterator? numbers.erase(it); } } // print '1 3 5 7 9' std::for_each(numbers.begin(), numbers.end(), printElement); return 0; } At first, I thought that erasing an element from the set while iterating through it would invalidate the iterator, and the increment at the for loop would have undefined behavior. Even so, I executed this test code and all went well, and I can't explain why. My question: Is this the defined behavior for std sets or is this implementation specific? I am using gcc 4.3.3 on ubuntu 10.04 (32-bit version), by the way. Thanks!
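
    For comparison, a sketch of the conventional pre-C++11 erase-while-iterating pattern, which never increments an iterator whose element has already been erased (on C++11 and later, it = numbers.erase(it) achieves the same thing):

        for (std::set<int>::iterator it = numbers.begin(); it != numbers.end(); /* no increment here */) {
            if (*it % 2 == 0) {
                numbers.erase(it++);  // post-increment: step to the next element, then erase the old position
            } else {
                ++it;
            }
        }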

    Read the article

  • Linq to sql DataContext cannot set load options after results been returned

    - by David Liddle
    I have two tables A and B with a one-to-many relationship respectively. On some pages I would like to get a list of A objects only. On other pages I would like to load A with objects in B attached. This can be handled by setting the load options DataLoadOptions options = new DataLoadOptions(); options.LoadWith<A>(a => a.B); dataContext.LoadOptions = options; The trouble occurs when I first of all view all A's with load options, then go to edit a single A (do not use load options), and after edit return to the previous page. I understand why the error is occurring but not sure how to best get round this problem. I would like the DataContext to be loaded up per request. I thought I was achieving this by using StructureMap to load up my DataContext on a per request basis. This is all part of an n-tier application where my Controllers call Services which in turn call Repositories. ForRequestedType<MyDataContext>() .CacheBy(InstanceScope.PerRequest) .TheDefault.Is.Object(new MyDataContext()); ForRequestedType<IAService>() .TheDefault.Is.OfConcreteType<AService>(); ForRequestedType<IARepository>() .TheDefault.Is.OfConcreteType<ARepository>(); Here is a brief outline of my Repository public class ARepository : IARepository { private MyDataContext db; public ARepository(MyDataContext context) { db = context; } public void SetLoadOptions(DataLoadOptions options) { db.LoadOptions = options; } public IQueryable<A> Get() { return from a in db.A select a; } So my ServiceLayer, on View All, sets the load options and then gets all A's. On editing A my ServiceLayer should spin up a new DataContext and just fetch a list of A's. When sql profiling, I can see that when I go to the Edit page it is requesting A with B objects.
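
    As an illustration only (not taken from the post): LoadOptions can only be assigned before the context has returned any results, so one way to keep stale options from leaking between pages is to build a short-lived context per unit of work and set the options up front. MyDataContext, A, and B are the types from the question.

        // using System.Data.Linq;  (DataLoadOptions lives here)
        using (var db = new MyDataContext())
        {
            var options = new DataLoadOptions();
            options.LoadWith<A>(a => a.B);   // eager-load B only on the pages that need it
            db.LoadOptions = options;        // must be assigned before the first query executes
            var list = (from a in db.A select a).ToList();
        }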

    Read the article

  • SIP UAS asks for OPTIONS

    - by TacB0sS
    Hey, I have a UAC that registers to a UAS; after registration the UAS sends me an OPTIONS request. What should I answer it with? Only the audio media streams?
    Update I: Allow me to explain myself better... if I want to invite someone to a session I USE the INVITE method and negotiate the media then, for that specific session. But once I register to the server, and it asks me for OPTIONS, then what should I supply, everything my client supports? Once I answer, would it deduce that every INVITE I would request from now on would use these media? Or would I need to supply new media with every request?
    Update II: Hi Wiz, I was in the process of building a negotiation system, so I tried it out and replied to the UAS. Here is the short dialog we had:

        OPTIONS sip:[email protected] SIP/2.0
        Via: SIP/2.0/UDP xx.xx.xx.xx:5060;branch=z9hG4bK45b197cb;rport=5060;received=xx.xx.xx.xx
        From: "Unknown" <sip:[email protected]>;tag=as66cf26df
        To: <sip:[email protected]>
        Contact: <sip:[email protected]>
        Call-ID: [email protected]
        CSeq: 102 OPTIONS
        User-Agent: Freeswitch 1.2.3
        Max-Forwards: 70
        Date: Sat, 05 Jun 2010 12:06:43 GMT
        Allow: INVITE,ACK,CANCEL,OPTIONS,BYE,REFER,SUBSCRIBE,NOTIFY,INFO
        Supported: replaces
        Content-Length: 0

    OPTIONS In Response To 102:

        SIP/2.0 200 OK
        Via: SIP/2.0/UDP xx.xx.xx.xx:5060;branch=z9hG4bK45b197cb;rport=5060;received=xx.xx.xx.xx
        From: "Unknown" <sip:[email protected]>;tag=as66cf26df
        To: <sip:[email protected]>
        CSeq: 102 OPTIONS
        Call-ID: [email protected]
        Allow: INVITE,CANCEL,ACK,BYE,OPTIONS
        Content-Type: application/sdp
        Content-Length: 248

        v=0
        o=310 4515233118481497946 4515233118481497946 IN IP4 10.0.0.1
        s=-
        i=Nu-Art Software - TacB0sS VoIP information
        c=IN IP4 10.0.0.1
        m=audio 40000 RTP/AVP 0 8 101
        a=rtpmap:0 PCMU/8000
        a=rtpmap:8 PCMA/8000
        a=rtpmap:101 telephone-event/8000

    This response caused the server to stop sending me the OPTIONS request. Does this mean I can only use these parameters with the server now? Or, as you said, does it not matter? Thanks, Adam.

    Read the article

  • Creating Property Set Expression Trees In A Developer Friendly Way

    - by Paulo Morgado
    In a previous post I showed how to create expression trees to set properties on an object. The way I did it was not very developer friendly. It involved explicitly creating the necessary expressions because the compiler won’t generate expression trees with property or field set expressions. Recently someone contacted me to help develop some kind of command pattern framework that used developer friendly lambdas to generate property set expression trees. Simply put, given this entity class: public class Person { public string Name { get; set; } } The person in question wanted to write code like this: var et = Set((Person p) => p.Name = "me"); Where et is the expression tree that represents the property assignment.
    So, if we can’t do this, let’s try the next best thing, which is splitting retrieving the property information from retrieving the value to assign to the property: var et = Set((Person p) => p.Name, () => "me"); And this is something that the compiler can handle. The implementation of Set receives an expression to retrieve the property information from and another expression to retrieve the value to assign to the property: public static Expression<Action<TEntity>> Set<TEntity, TValue>( Expression<Func<TEntity, TValue>> propertyGetExpression, Expression<Func<TValue>> valueExpression) The implementation of this method gets the property information from the body of the property get expression (propertyGetExpression) and the value expression (valueExpression) to build an assign expression and builds a lambda expression using the same parameter of the property get expression as its parameter: public static Expression<Action<TEntity>> Set<TEntity, TValue>( Expression<Func<TEntity, TValue>> propertyGetExpression, Expression<Func<TValue>> valueExpression) { var entityParameterExpression = (ParameterExpression)(((MemberExpression)(propertyGetExpression.Body)).Expression); return Expression.Lambda<Action<TEntity>>( Expression.Assign(propertyGetExpression.Body, valueExpression.Body), entityParameterExpression); }
    And now we can use the expression to translate to another context or just compile and use it: var et = Set((Person p) => p.Name, () => "me"); Console.WriteLine(et); // Prints: p => (p.Name = "me") var d = et.Compile(); d(person); Console.WriteLine(person.Name); // Prints: me
    It can even support closures: var et = Set((Person p) => p.Name, () => name); Console.WriteLine(et); // Prints: p => (p.Name = value(<>c__DisplayClass0).name) var d = et.Compile(); name = "me"; d(person); Console.WriteLine(person.Name); // Prints: me name = "you"; d(person); Console.WriteLine(person.Name); // Prints: you
    Not so useful in the intended scenario (but still possible) is building an expression tree that receives the value to assign to the property as a parameter: public static Expression<Action<TEntity, TValue>> Set<TEntity, TValue>(Expression<Func<TEntity, TValue>> propertyGetExpression) { var entityParameterExpression = (ParameterExpression)(((MemberExpression)(propertyGetExpression.Body)).Expression); var valueParameterExpression = Expression.Parameter(typeof(TValue)); return Expression.Lambda<Action<TEntity, TValue>>( Expression.Assign(propertyGetExpression.Body, valueParameterExpression), entityParameterExpression, valueParameterExpression); } This new expression can be used like this: var et = Set((Person p) => p.Name); Console.WriteLine(et); // Prints: (p, Param_0) => (p.Name = Param_0) var d = et.Compile(); d(person, "me"); Console.WriteLine(person.Name); // Prints: me d(person, "you"); Console.WriteLine(person.Name); // Prints: you
    The only caveat is that we need to be able to write code to read the property in order to write to it.

    Read the article

  • Running a Mongo Replica Set on Azure VM Roles

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/15/running-a-mongo-replica-set-on-azure-vm-roles.aspx
    Setting up a MongoDB Replica Set with a bunch of Azure VMs is straightforward stuff. Here’s a step-by-step which gets you from 0 to fully-redundant 3-node document database in about 30 minutes (most of which will be spent waiting for VMs to fire up). First, create yourself 3 VM roles, which is the minimum number of nodes you need for high availability. You can use any OS that Mongo supports. This guide uses Windows but the only difference will be the mechanism for starting the Mongo service when the VM starts (Windows Service, daemon etc.) While the VMs are provisioning, download and install Mongo locally, so you can set up the replica set with the Mongo shell. We’ll create our replica set from scratch, doing one machine at a time (if you have a single node you want to upgrade to a replica set, it’s the same from step 3 onwards):
    1. Setup Mongo: Log into the first node, download mongo and unzip it to C:. Rename the folder to remove the version – so you have c:\MongoDB\bin etc. – and create a new folder for the logs, c:\MongoDB\logs.
    2. Setup your data disk: When you initialize a node in a replica set, Mongo pre-allocates a whole chunk of storage to use for data replication. It will use up to 5% of your data disk, so if you use a Windows VM image with a default 120Gb disk and host your data on C:, then Mongo will allocate 6Gb for replication. And that takes a while. Instead you can create yourself a new partition by shrinking down the C: drive in Computer Management, by say 10Gb, and then creating a new logical disk for your data from that spare 10Gb, which will be allocated as E:. Create a new folder, e:\data.
    3. Start Mongo: When that’s done, start a command line, point to the mongo binaries folder, install Mongo as a Windows Service, running in replica set mode, and start the service:

        cd c:\mongodb\bin
        mongod -logpath c:\mongodb\logs\mongod.log -dbpath e:\data -replSet TheReplicaSet --install
        net start mongodb

    4. Open the ports: Mongo uses port 27017 by default, so you need to allow access in the machine and in Azure. In the VM, open Windows Firewall and create a new inbound rule to allow access via port 27017. Then in the Azure Management Console for the VM role, under the Configure tab add a new rule, again to allow port 27017.
    5. Initialise the replica set: Start up your local mongo shell, connecting to your Azure VM, and initiate the replica set: c:\mongodb\bin\mongo sc-xyz-db1.cloudapp.net rs.initiate() This is the bit where the new node (at this point the only node) allocates its replication files, so if your data disk is large, this can take a long time (if you’re using the default C: drive with 120Gb, it may take so long that rs.initiate() never responds. If you’re sat waiting more than 20 minutes, start another instance of the mongo shell pointing to the same machine to check on it). Run rs.conf() and you should see one node configured.
    6. Fix the host name for the primary – *don’t miss this one*: For the first node in the replica set, Mongo on Windows doesn’t populate the full machine name. Run rs.conf() and the name of the primary is sc-xyz-db1, which isn’t accessible to the outside world. The replica set configuration needs the full DNS name of every node, so you need to manually rename it in your shell, which you can do like this: cfg = rs.conf() cfg.members[0].host = 'sc-xyz-db1.cloudapp.net:27017' rs.reconfig(cfg) When that returns, rs.conf() will have your full DNS name for the primary, and the other nodes will be able to connect. At this point you have a working database, so you can start adding documents, but there’s no replication yet.
    7. Add more nodes: For the next two VMs, follow steps 1 through to 4, which will give you a working Mongo database on each node, which you can add to the replica set from the shell with rs.add(), using the full DNS name of the new node and the port you’re using: rs.add('sc-xyz-db2.cloudapp.net:27017') Run rs.status() and you’ll see your new node in STARTUP2 state, which means it’s initializing and replicating from the PRIMARY. Repeat for your third node: rs.add('sc-xyz-db3.cloudapp.net:27017') When all nodes are finished initializing, you will have a PRIMARY and two SECONDARY nodes showing in rs.status(). Now you have high availability, so you can happily stop db1, and one of the other nodes will become the PRIMARY with no loss of data or service.
    Note – the process for AWS EC2 is exactly the same, but with one important difference. On the Azure Windows Server 2012 base image, the MongoDB release for 64-bit 2008R2+ works fine, but on the base 2012 AMI that release keeps failing with a UAC permission error. The standard 64-bit release is fine, but it lacks some optimizations that are in the 2008R2+ version.
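
    For convenience, the mongo shell commands from steps 5 to 7 gathered in one place (the host names are the example ones from the post, and the quotes are plain ASCII so the lines can be pasted into the shell as-is):

        // connect to the first node, then:
        rs.initiate()
        rs.conf()                                               // should show one member, named sc-xyz-db1
        cfg = rs.conf()
        cfg.members[0].host = 'sc-xyz-db1.cloudapp.net:27017'   // fix the primary's host name
        rs.reconfig(cfg)
        rs.add('sc-xyz-db2.cloudapp.net:27017')
        rs.add('sc-xyz-db3.cloudapp.net:27017')
        rs.status()                                             // new nodes pass through STARTUP2 while they sync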

    Read the article

  • New security options in UCM Patch Set 3

    - by kyle.hatlestad
    While the Patch Set 3 (PS3) release was mostly focused on bug fixes and such, some new features sneaked in there. One of those new features relates to the security options. In 10gR3 and prior versions, UCM had a component called Collaboration Manager which allowed for project folders to be created and groups of users assigned as members to collaborate on documents. With this component came access control lists (ACL) for content and folders. Users could assign specific security rights on each and every document and folder within a project. And it was even possible to enable these ACLs without having the Collaboration Manager component enabled (see technote# 603148.1). When 11g came out, Collaboration Manager was no longer available. But the configuration settings to turn on ACLs were still there. Well, in PS3 they're implemented slightly differently. And there is a new component available which adds an additional dimension to define security on the object, Roles. So now instead of selecting individual users or groups of users (defined as an Alias in User Admin), you can select a particular role. And if a user has that role, they are granted that level of access. This can allow for a much more flexible and manageable security model instead of trying to manage with just user and group access as people come and go in the organization.
    The way that it is enabled is still through configuration entries. First log in as an administrator and go to Administration -> Admin Server. On the Component Manager page, click the 'advanced component manager' link in the description paragraph at the top. In the list of Disabled Components, enable the RoleEntityACL component. Then click the General Configuration link on the left. In the Additional Configuration Variables text area, enter the new configuration values: UseEntitySecurity=true SpecialAuthGroups=<comma separated list of Security Groups to honor ACLs> The SpecialAuthGroups should be a list of Security Groups that honor the ACL fields. If an ACL is applied to a content item with a Security Group outside this list, it will be ignored. Save the settings and restart the instance.
    Upon restart, three new metadata fields will be created: xClbraUserList, xClbraAliasList, xClbraRoleList. If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection. On the Check In, Search, and Update pages, values are added by simply typing in the value and getting a type-ahead list of possible values. Select the value, click Add and then set the level of access (Read, Write, Delete, or Admin). If all of the fields are blank, then it simply falls back to just Security Group and Account access. For Users and Groups, these values are automatically picked up from the corresponding database tables. In the case of Roles, this is an explicitly defined list of choices that are made available. These values must match the role that is being defined from WebLogic Server or your LDAP/AD repository. To add these values, go to Administration -> Admin Applets -> Configuration Manager. On the Views tab, edit the values for the ExternalRolesView. By default, 'guest' and 'authenticated' are added. Once added through the view, they will be available to select from for the Roles Access List.
    As for how they are stored in the metadata fields, each entry starts with its identifier: ampersand (&) symbol for users, "at" (@) symbol for groups, and colon (:) for roles. Following that is the entity name. And at the end is the level of access in parentheses, e.g. (RWDA). And each entry is separated by a comma. So if you were populating values through batch loader or an external source, the values would be defined this way. Detailed information on Access Control Lists can be found in the Oracle Fusion Middleware System Administrator's Guide for Oracle Content Server.
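
    As an illustration, the configuration entries and the stored-value format described above might look like this (the Security Group names and the user, group, and role names are made-up examples, not values from the post):

        UseEntitySecurity=true
        SpecialAuthGroups=Public,Secure

        # Example of how an ACL could be stored across the three metadata fields:
        #   xClbraUserList:  &jsmith(RWDA), &mjones(RW)
        #   xClbraAliasList: @SalesTeam(RW)
        #   xClbraRoleList:  :authenticated(R)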

    Read the article

  • Options for different domain and hosting

    - by Carl
    The situation: I have a hosting service (one.com) on which I have installed a wordpress.org site in a subdirectory 'wordpress': myhost.com/wordpress/ (myhost.com is actually my own domain, but it already has contents and I don't want wordpress/ to appear in the root of that domain.) I want to use a second domain for this site. Thinking I would be able to forward to the wordpress site without problems, I registered the domain at GoDaddy.com: mydomain.com
    What I want: So when my visitors type in mydomain.com, I want them to see the contents on myhost.com/wordpress/, and the same for all subpages (mydomain.com/a/subpage fetches from myhost.com/wordpress/a/subpage). Just a redirect isn't enough, I want my visitors to see only mydomain.com as their domain.
    Some notes: If I set up forwarding with URL masking at GoDaddy, they just give a full frame, pointing to myhost.com/wordpress/. This isn't good enough for me, since mydomain.com will always show up in the address bar, also for subpages (I want mydomain/a/subpage to show in the address bar for a subpage). I believe this could in principle be done with a .htaccess file with URL rewriting, but I have no hosting with GoDaddy so I can't upload such a file there. Hosting with GoDaddy is very expensive (of course) so I don't want to do that. I don't think I can use DNS settings; the host of mydomain.com says they don't allow anyone else to point to their name servers. If possible, I wouldn't want to re-install the wordpress site, it would take quite some time. I'd prefer to keep it at myhost.com/wordpress/ (if possible). Anything involving transferring the domain is supposed to take 5-7 working days. I would need my site up-and-running earlier than that, so I'd like to avoid it if possible.
    Am I locked in? As it seems, I am rather locked-in with GoDaddy. I can't use the domain with .htaccess since I can't upload such a file (and won't pay for hosting by GoDaddy). I can't use any of their forward options since none of them do what I want (one just forwards, the one that masks the URL does it with frames). Would you agree?
    Possible solutions: Transfer the domain to any hosting service with reasonable hosting pricing, as opposed to GoDaddy (I'd probably use one.com, the same host as for myhost.com, in that case), and there either re-install wordpress on the new account, or use .htaccess with URL rewrite on the new account to fetch the contents from myhost.com/wordpress/. Can this be set up to work with sub-pages as well? And visitors won't ever see "myhost.com/wordpress", just "mydomain.com"? I.e., mydomain.com/a/subpage/ would fetch from myhost.com/wordpress/a/subpage/? This might be a long shot but: find some free (preferably) hosting that allows pointing to their nameservers, make DNS settings at GoDaddy so that my domain appears at that site, and there put a .htaccess file with URL rewriting to forward to myhost.com/wordpress/. Could this be possible? What services could I use in that case? As I see it, this would be the only way not to have to transfer the domain (taking 5-7 working days) and not have to re-install the wordpress site. Sorry for the long question. All info and ideas are welcome.
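
    Purely to illustrate the .htaccess idea from the "Possible solutions" paragraph (this assumes mydomain.com ends up served from the same account and document root as myhost.com, with mod_rewrite available; the domain names are the placeholders used in the question):

        RewriteEngine On
        # Only touch requests arriving under the second domain
        RewriteCond %{HTTP_HOST} ^(www\.)?mydomain\.com$ [NC]
        # Avoid rewriting paths that already point into the wordpress folder
        RewriteCond %{REQUEST_URI} !^/wordpress/
        # Serve everything from the wordpress subdirectory without changing the address bar
        RewriteRule ^(.*)$ /wordpress/$1 [L]

    WordPress itself would typically also need its site address set to mydomain.com, otherwise links it generates would still expose the /wordpress/ path.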

    Read the article

  • Keeping your options open in a cloud solution

    - by BuckWoody
    In on-premises solutions we have the full range of options open for a given computing solution – but we don’t always take advantage of them, for multiple reasons. Data goes in a Relational Database Management System, files go on a share, and e-mail goes to the Exchange server. Over time, vendors (including ourselves) add in functionality to one product that allows non-standard use of the platform. For example, SQL Server (and Oracle, and others) allow large binary storage in or through the system – something not originally intended for an RDBMS to handle. There are certainly times when this makes sense, of course, but often these platform hammers turn every problem into a nail. It can make us “lazy” in our design – we sometimes don’t take the time to learn another architecture because the one we’ve spent so much time with can handle what we want to do. But there’s a distinct danger here. In nature, when a population shares too many of the same traits, it can cause a complete collapse if a situation exploits a weakness shared by that population. The same is true with not using the right tool for the job in a computing environment. Your company or organization depends on your knowledge as a professional to select the best mix of supportable, flexible, cost-effective technologies to solve their problems, whether you’re in an architect role or not.  So take some time today to learn something new. The way I do this is to select a given problem, and try to solve it with a technology I’m not familiar with. For instance – create a Purchase Order system in Excel, then in Hadoop or MongoDB, or even in flat-files using PowerShell as an interface. No, I’m not suggesting any of these architectures are the proper way to solve the PO problem, but taking something concrete that you know well and applying that meta-knowledge to another platform will assist you in exercising the “little grey cells” and help you and your organization understand what is open to you. And of course you can do all of this on-premises – but my recommendation is to check out a cloud platform (my suggestion would of course be Windows Azure :) ) and try it there. Most providers (including Microsoft) provide free time to do that.

    Read the article

  • database independent coding framework options?

    - by statirasystems
    Background: I have not programmed in a while besides doing VBA and a little VB.NET. So please forgive my language use. I'm green and have a head cold. I am reading all I can now, but I have no programming circles to draw from. The information I am providing is to help guide you to what I am looking for. I am not confident I can ask the question properly.
    Story: I have four different projects that I am starting. Obviously I won't be working on all at the same time, however they each will have similar needs and be interrelated. They are as follows:
    Desktop Environment/System User Interface - basically a product that runs on major computers via mono or .net that unifies the look and functions. In the context of the upcoming question it would be able to directly access data of various types. It would work in tandem with my office suite, system manager, and network application framework.
    Office Suite - technically it would not be a suite since I will be doing it from one interface, except for the Communications Application. As far as the question, it will need to be able to link to various data sources for storing files and using, manipulating, and presenting information.
    System Manager - an intelligent system to manage and administer the entire network and all equipment. As far as the question, it needs to be able to access data for archiving and for accessing its own settings stored in various formats, sql or xml.
    Network Application Framework - a complete system that can be used for ERP, CRM, CMS, Errata, File Management, and so on. As to the question, it needs to be able to access its own data or interlink with existing applications.
    Requirement: C#, simplifies and reduces coding, uses the same code to access different databases (i.e. MySQL, MS SQL, Access, XML, ...), Mono would be nice but not a must.
    Question: What libraries, frameworks, or other options would be able to help with this? Is there a good resource to guide me? I don't want arguing over what is best, just information to help me further understand and make an educated decision.
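
    As one concrete illustration of "same code, different databases" in C# (this is my sketch, not a recommendation from the post): ADO.NET's provider factories let the provider name and connection string come from configuration, so the calling code stays identical whether it talks to SQL Server, MySQL, or Access via OLE DB, and it also runs on Mono.

        using System;
        using System.Data.Common;

        class ProviderAgnosticDemo
        {
            static void Main()
            {
                // Both of these would normally be read from a config file.
                string provider = "System.Data.SqlClient";   // e.g. "MySql.Data.MySqlClient" if that provider is installed
                string connectionString = "...";              // connection string for whichever database is in use

                DbProviderFactory factory = DbProviderFactories.GetFactory(provider);
                using (DbConnection conn = factory.CreateConnection())
                {
                    conn.ConnectionString = connectionString;
                    conn.Open();
                    using (DbCommand cmd = conn.CreateCommand())
                    {
                        cmd.CommandText = "SELECT 1";
                        Console.WriteLine(cmd.ExecuteScalar());
                    }
                }
            }
        }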

    Read the article

  • MERGE gives better OUTPUT options

    - by Rob Farley
    MERGE is very cool. There are a ton of useful things about it – mostly around the fact that you can implement a ton of change against a table all at once. This is great for data warehousing, handling changes made to relational databases by applications, all kinds of things. One of the more subtle things about MERGE is the power of the OUTPUT clause. Useful for logging.   If you’re not familiar with the OUTPUT clause, you really should be – it basically makes your DML (INSERT/DELETE/UPDATE/MERGE) statement return data back to you. This is a great way of returning identity values from INSERT commands (so much better than SCOPE_IDENTITY() or the older (and worse) @@IDENTITY, because you can get lots of rows back). You can even use it to grab default values that are set using non-deterministic functions like NEWID() – things you couldn’t normally get back without running another query (or with a trigger, I guess, but that’s not pretty). That inserted table I referenced – that’s part of the ‘behind-the-scenes’ work that goes on with all DML changes. When you insert data, this internal table called inserted gets populated with rows, and then used to inflict the appropriate inserts on the various structures that store data (HoBTs – the Heaps or B-Trees used to store data as tables and indexes). When deleting, the deleted table gets populated. Updates get a matching row in both tables (although this doesn’t mean that an update is a delete followed by an inserted, it’s just the way it’s handled with these tables). These tables can be referenced by the OUTPUT clause, which can show you the before and after for any DML statement. Useful stuff. MERGE is slightly different though. With MERGE, you get a mix of entries. Your MERGE statement might be doing some INSERTs, some UPDATEs and some DELETEs. One of the most common examples of MERGE is to perform an UPSERT command, where data is updated if it already exists, or inserted if it’s new. And in a single operation too. Here, you can see the usefulness of the deleted and inserted tables, which clearly reflect the type of operation (but then again, MERGE lets you use an extra column called $action to show this). (Don’t worry about the fact that I turned on IDENTITY_INSERT, that’s just so that I could insert the values) One of the things I love about MERGE is that it feels almost cursor-like – the UPDATE bit feels like “WHERE CURRENT OF …”, and the INSERT bit feels like a single-row insert. And it is – but into the inserted and deleted tables. The operations to maintain the HoBTs are still done using the whole set of changes, which is very cool. And $action – very convenient. But as cool as $action is, that’s not the point of my post. If it were, I hope you’d all be disappointed, as you can’t really go near the MERGE statement without learning about it. The subtle thing that I love about MERGE with OUTPUT is that you can hook into more than just inserted and deleted. Did you notice in my earlier query that my source table had a ‘src’ field, that wasn’t used in the insert? Normally, this would be somewhat pointless to include in my source query. But with MERGE, I can put that in the OUTPUT clause. This is useful stuff, particularly when you’re needing to audit the changes. Suppose your query involved consolidating data from a number of sources, but you didn’t need to insert that into the actual table, just into a table for audit. 
This is now very doable, either using the INTO clause of OUTPUT, or surrounding the whole MERGE statement in brackets (parentheses if you’re American) and using a regular INSERT statement. This is also doable if you’re using MERGE to just do INSERTs. In case you hadn’t realised, you can use MERGE in place of an INSERT statement. It’s just like the UPSERT-style statement we’ve just seen, except that we want nothing to match. That’s easy to do, we just use ON 1=2. This is obviously more convoluted than a straight INSERT. And it’s slightly more effort for the database engine too. But, if you want the extra audit capabilities, the ability to hook into the other source columns is definitely useful. Oh, and before people ask if you can also hook into the target table’s columns... Yes, of course. That’s what deleted and inserted give you.
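
    A minimal sketch of that trick (the table and column names here are invented for illustration): a MERGE whose ON clause can never match, used purely so the OUTPUT clause can capture a source column alongside the inserted values.

        MERGE INTO dbo.Target AS t
        USING (SELECT id, name, src FROM dbo.StagingSource) AS s
            ON 1 = 2                                    -- never matches, so every source row becomes an INSERT
        WHEN NOT MATCHED THEN
            INSERT (id, name) VALUES (s.id, s.name)
        OUTPUT $action, inserted.id, inserted.name, s.src    -- s.src comes from the source, not the target
            INTO dbo.AuditLog (action_taken, id, name, src);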

    Read the article

  • Developer Training – Various Options for Maximum Benefit – Part 4

    - by pinaldave
    Developer Training - Importance and Significance - Part 1
    Developer Training – Employee Morals and Ethics – Part 2
    Developer Training – Difficult Questions and Alternative Perspective - Part 3
    Developer Training – Various Options for Developer Training – Part 4
    Developer Training – A Conclusive Summary - Part 5
    If you have been reading this series, by now you are aware of all the pros and cons that can come along with training.  We’ve asked and answered hard questions, and investigated the “whys” and “hows” of training.  Now it is time to talk about all the different kinds of training that are out there!
    On Job Training: The most common type of training is on the job training.  Everyone receives this kind of education – even experts who come in to consult have to be taught where the printer, pens, and copy machines are.  If you are thinking about more concrete topics, though, on the job training can be some of the easiest to come across.  Picture this: someone in the company whom you really admire is hard at work on a project.  You come up to them and ask to help them out – if they are a busy developer, the odds are that they will say “yes, please!”   If you phrase your question as an offer of help, you can receive training without ever putting someone in the awkward position of acting as a mentor.  However, some people may want the task of being a mentor.  It can never hurt to ask.  Most people will be more than willing to pass their knowledge along.
    Extreme Programming: If your company and coworkers are willing, you can even investigate Extreme Programming.  This is a type of programming that allows small teams to quickly develop code and products that are released with almost immediate user feedback.  You can find more information at http://www.extremeprogramming.org/.  If this is something your company could use, suggest it to your supervisor.  Even if they say no, it will make it clear that you are a go-getter who is interested in new and exciting projects.  If the answer is yes, then you have the opportunity to get some of the best on the job training around.
    In Person Training: When you say the word “training,” most people’s minds go back to the classroom, an image they are familiar with.  While training doesn’t always have to be in a traditional setting, because it is so familiar it can also be the most valuable type of training.  There are many ways to get training through a live instructor.  Some companies may be willing to send a representative to you, where employees will get training, sometimes food and coffee, and a live instructor who can answer questions immediately.  Sometimes these trainers are also able to do consultations at the same time, which can be invaluable to a company.  If you are the one to ask your supervisor for a training session that can also be turned into a consultation, you may stick in their minds as an incredibly dedicated employee.  If you can’t find a representative, local colleges can also be a good resource for free or cheap classes – or they may have representatives coming who are willing to take on a few more students.
    Benefits of On Demand Developer Training: Of course, you can often get the best of all these types of training with online or On Demand training.  You can get the benefit of a live instructor who is willing to answer questions (although in this case, usually through e-mail or other online venues), there are often real-world examples to follow along – like on the job training – and best of all you can learn whenever you have the time or need.  Did a problem with your server come up at midnight when all your supervisors are safe at home and probably in bed?  No problem!  On Demand training is especially useful if you need to slow down, pause, or rewind a training session.  Not even a real-life instructor can do that!
    When I was writing this blog post, I felt that each of the subjects I have covered could be a blog post of its own. However, I wanted to keep the blog post concise and so touched on three major training aspects: 1) On Job Training, 2) In Person Training and 3) Online Training. Here is the question for you – are there any other kinds of training methods available which are effective and one should consider? If yes, what are those? I may write a follow-up blog post on the same subject next week. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Php profiling on production server or other options

    - by absentx
    Alright I need some help here. I am commonly asked to speed up certain sections of some websites that I program for. I have yet to be able to figure out how to use a good php diagnosis/profiling tool. Some things to consider: The sites I am working on are already built, getting a testing server set up locally is just a huge pain..I have to rewrite include paths and just so many things. This is a results oriented deal and spending days to get a site fully working on a testing platform so I can debug one page probably isn't an option. I can write tons of php, but I have no clue how to interact or mess with servers. So every tutorial I read about setting up xdebug or xhprof all seem to involve getting something installed on a production server that I don't have access to or have no clue how to work with. So are there any solutions out there that will show me where my php is slow without having to do all sorts of server stuff that I just don't know how to do? Xhprof seems to be the closest to useable for me but from what I can tell it still has to be installed on a server. If anyone can just point me in the right direction on this I would be very grateful. Maybe getting these things put on the server isn't a big deal...but I have never interacted with server command lines or anything like that. I suppose I should start sometime but I really have no idea where to start. Plus I realize that profiling on a live platform is not the greatest idea either but I feel I am in a tough spot. I have speed issues to solve and setting up a local environment while a great idea, just doesn't seem real practical at the moment.
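
    If installing a profiler on the server really is off the table, one crude stopgap (my suggestion, not something from the post) is to time suspect sections directly in the page with microtime(), which needs nothing beyond PHP itself:

        <?php
        // Rough checkpoint timer: call checkpoint() around the sections you suspect,
        // then dump the deltas at the bottom of the page (or into a log) to see where the time goes.
        $checkpoints = array();
        function checkpoint($label) {
            global $checkpoints;
            $checkpoints[] = array($label, microtime(true));
        }

        checkpoint('start');
        // ... the code under suspicion, e.g. an expensive query or include ...
        checkpoint('after query');
        // ... the rest of the page ...
        checkpoint('end');

        for ($i = 1; $i < count($checkpoints); $i++) {
            printf("%-12s %.4f s<br>\n", $checkpoints[$i][0],
                $checkpoints[$i][1] - $checkpoints[$i - 1][1]);
        }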

    Read the article

  • How to get correct Set-Cookie headers for NSHTTPURLResponse?

    - by overboming
    I want to use the following code to login to a website which returns its cookie information in the following manner: Set-Cookie: 19231234 Set-Cookie: u2am1342340 Set-Cookie: owwjera I'm using the following code to log in to the site, but the print statement at the end doesn't output anything about "set-cookie". On Snow leopard, the library seems to automatically pick up the cookie for this site and later connections sent out is set with correct "cookie" headers. But on leopard, it doesn't work that way, so is that a trigger for this "remember the cookie for certain root url" behavior? NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease]; [request setURL:[NSURL URLWithString:uurl]]; [request setHTTPMethod:@"POST"]; [request setValue:postLength forHTTPHeaderField:@"Content-Length"]; [request setValue:@"application/x-www-form-urlencoded" forHTTPHeaderField:@"Content-Type"]; [request setValue:@"keep-live" forHTTPHeaderField:@"Connection"]; [request setValue:@"300" forHTTPHeaderField:@"Keep-Alive"]; [request setHTTPShouldHandleCookies:YES]; [request setHTTPBody:postData]; [request setTimeoutInterval:10.0]; NSData *urlData; NSHTTPURLResponse *response; NSError *error; urlData = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error]; NSLog(@"response dictionary %@",[response allHeaderFields]);
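
    One thing worth trying (a sketch on my part, not from the post) is parsing the cookies out of the header dictionary explicitly with NSHTTPCookie, which sidesteps differences in how the shared cookie storage behaves between OS releases:

        // Pull the Set-Cookie values out of the response headers manually.
        NSDictionary *headers = [response allHeaderFields];
        NSArray *cookies = [NSHTTPCookie cookiesWithResponseHeaderFields:headers
                                                                   forURL:[NSURL URLWithString:uurl]];
        for (NSHTTPCookie *cookie in cookies) {
            NSLog(@"cookie %@ = %@", [cookie name], [cookie value]);
        }
        // If needed, stash them so later requests pick them up automatically.
        [[NSHTTPCookieStorage sharedHTTPCookieStorage] setCookies:cookies
                                                            forURL:[NSURL URLWithString:uurl]
                                                   mainDocumentURL:nil];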

    Read the article

  • Fastest way to put contents of Set<String> to a single String with words separated by a whitespace?

    - by Lars Andren
    I have a few Set<String>s and want to transform each of these into a single String where each element of the original Set is separated by a whitespace " ". A naive first approach is doing it like this Set<String> set_1; Set<String> set_2; StringBuilder builder = new StringBuilder(); for (String str : set_1) { builder.append(str).append(" "); } this.string_1 = builder.toString(); builder = new StringBuilder(); for (String str : set_2) { builder.append(str).append(" "); } this.string_2 = builder.toString(); Can anyone think of a faster, prettier or more efficient way to do this?
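
    One way to make it shorter without changing the approach is to pull the loop into a helper, which also avoids the trailing space the append-in-a-loop version leaves behind (on newer Java versions, String.join(" ", set) or a Guava Joiner does the same thing):

        static String joinWithSpaces(Set<String> set) {
            StringBuilder builder = new StringBuilder();
            for (String str : set) {
                if (builder.length() > 0) {
                    builder.append(' ');   // separator only between elements, so no trailing space
                }
                builder.append(str);
            }
            return builder.toString();
        }

        // usage:
        // this.string_1 = joinWithSpaces(set_1);
        // this.string_2 = joinWithSpaces(set_2);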

    Read the article

  • How to remove an element from set using Iterator?

    - by ankit
    I have a scenario where I am iterating over a set using an iterator. Now I want to remove the 1st element while my iterator is on the 2nd element. How can I do it? I don't want to convert this set to a list and use a listIterator. I don't want to collect all objects to be removed in another set and call removeAll. Sample code: Set<MyObject> mySet = new HashSet<MyObject>(); mySet.add(MyObject1); mySet.add(MyObject2); ... Iterator itr = mySet.iterator(); while(itr.hasNext()) { // Now iterator is at second element and I want to remove first element }
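
    For the common case of removing during iteration, the supported pattern is Iterator.remove(); note that it can only remove the element most recently returned by next(), not an arbitrary earlier one. A small sketch (shouldRemove stands in for whatever removal criterion applies):

        Iterator<MyObject> itr = mySet.iterator();
        while (itr.hasNext()) {
            MyObject current = itr.next();
            if (shouldRemove(current)) {   // placeholder for the caller's criterion
                itr.remove();              // removes the element last returned by next(); the iterator stays valid
            }
        }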

    Read the article

  • how to set different wallpapers in ubuntu workspaces

    - by Steve
    I'm having an issue trying to customize Ubuntu workspaces in the GNOME environment. Assuming the default four workspaces, aka desktops, how can one have a different wallpaper for each one? When I go to an individual workspace to set its wallpaper, all of the workspaces use it. So if I set wallpaper B on workspace 2 and wallpaper C on workspace 3, what will happen is that all the workspaces will default to the last wallpaper set, no matter which workspace it was set in. What's even weirder is that the very first wallpaper set, upon using it for the very first time, is what shows up when I call up the Workspaces tool. Even though once I settle upon a workspace, no matter which one, the original wallpaper disappears and the last wallpaper set is the one that always shows up.

    Read the article

  • Bin packing part 6: Further improvements

    - by Hugo Kornelis
    In part 5 of my series on the bin packing problem, I presented a method that sits somewhere in between the true row-by-row iterative characteristics of the first three parts and the truly set-based approach of the fourth part. I did use iteration, but each pass through the loop would use a set-based statement to process a lot of rows at once. Since that statement is fairly complex, I am sure that a single execution of it is far from cheap – but the algorithm used is efficient enough that the entire...(read more)

    Read the article

  • How to set a Static Route on a Storage Node

    - by csoto
    To set up a host route to an IP address, here are the procedures for BUI and CLI. You need to know the destination, mask, interface and network. Note that, in this case, the values are just examples. CLI - Log into CLI and run the commands below: configuration net routing create set family=IPv4 set destination=203.246.186.80 set mask=32 set gateway=192.168.100.230 set interface=igb0 commit BUI - Log in to the web ui of the ZFSSA NAS head - Click Configuration - Network - Routing - (+) - In the popup window that will be displayed, enter the values accordingly on the popup window shown on the screenshot below: Any of the two above procedures should get your desired route in place.
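
    The same CLI sequence, one command per line for readability (the addresses and interface are the example values from the post):

        configuration net routing create
        set family=IPv4
        set destination=203.246.186.80
        set mask=32
        set gateway=192.168.100.230
        set interface=igb0
        commit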

    Read the article

  • Bootstrap-Switch options don't take effect

    - by Linda Keating
    I'm using Bootstrap-Switch and the documentation says that options can be passed as an object on initialization. enter link description here And here is a list of options: enter link description here So my code looks like this: var options = { onText: "Yes", onColor: 'primary', offColor: 'danger', offText: "No", animate: true, }; $("[name='radioGroup1']").bootstrapSwitch(options); And the switch itself works fine, but none of the defaults are overwritten by the options. Anybody got any examples of how this might work? Thanks

    Read the article

  • Taming XCode's auto-complete options

    - by Nippysaurus
    I am fairly new to XCode and the Objective-C language. When I am instantiating a class, for example an NSMutableArray, XCode will provide a whole lot of auto-complete options. Even for an empty class which simply extends an NSObject has many options, most of which seem completely useless. What is the reason for having so many auto-complete options, or can they be "tamed" in the preferences?

    Read the article

  • Over 200 active requests like "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)"

    - by Stefan Lasiewski
    Some details: Webserver: Apache/2.2.13 (FreeBSD) mod_ssl/2.2.13 OpenSSL/0.9.8e OS: FreeBSD 7.2-RELEASE This is a FreeBSD Jail. I believe I use the Apache 'prefork' MPM (I run the default for FreeBSD). I use the default values for MaxClients (256) I have enabled mod_status, with "ExtendedStatus On". When I view /server-status , I see a handful of regular requests. I also see over 230 requests from the 'localhost', like these: 37-0 - 0/0/1 . 0.00 1510 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0 38-0 - 0/0/1 . 0.00 1509 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0 39-0 - 0/0/3 . 0.00 1482 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0 40-0 - 0/0/6 . 0.00 1445 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0 I also see about 2417 requests yesterday from the localhost, like these: Apr 14 11:16:40 192.168.16.127 httpd[431]: www.example.gov 127.0.0.2 - - [15/Apr/2010:11:16:40 -0700] "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)" The page at http://wiki.apache.org/httpd/InternalDummyConnection says "These requests are perfectly normal and you do not, in general, need to worry about them", but I'm not so sure. Why are there over 230 of these? Are these active connections? If I have "MaxClients 256", and over 230 of these connections, it seems that my webserver is dangerously close to running out of available connections. It also seems like Apache should only need a handful of these "internal dummy connections" We actually had two unexplained outages last night, and I am wondering if these "internal dummy connection" caused us to run out of available connections.
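
    Separate from the outage question, a common housekeeping step (my suggestion, not something from the post) is to keep the dummy requests out of the access log so real traffic is easier to read:

        # Mark requests whose User-Agent identifies them as internal dummy connections,
        # then exclude them from the access log.
        SetEnvIf User-Agent "internal dummy connection" internal_dummy
        CustomLog "/var/log/httpd-access.log" combined env=!internal_dummy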

    Read the article

  • Returning "200 OK" in Apache on HTTP OPTIONS requests

    - by i.
    I'm attempting to implement cross-domain HTTP access control without touching any code. I've got my Apache(2) server returning the correct Access Control headers with this block: Header set Access-Control-Allow-Origin "*" Header set Access-Control-Allow-Methods "POST, GET, OPTIONS" I now need to prevent Apache from executing my code when the browser sends a HTTP OPTIONS request (it's stored in the REQUEST_METHOD environment variable), returning 200 OK. How can I configure Apache to respond "200 OK" when the request method is OPTIONS? I've tried this mod_rewrite block, but the Access Control headers are lost. RewriteEngine On RewriteCond %{REQUEST_METHOD} OPTIONS RewriteRule ^(.*)$ $1 [R=200,L]
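
    For what it's worth, a variation that is often suggested for this situation (I have not verified it against this exact setup) is to add the headers with the always condition, so they are attached even to the response generated by the rewrite rule:

        Header always set Access-Control-Allow-Origin "*"
        Header always set Access-Control-Allow-Methods "POST, GET, OPTIONS"

        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} OPTIONS
        RewriteRule ^(.*)$ $1 [R=200,L]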

    Read the article
