Search Results

  • python MySQLdb gives invalid syntax when trying to INSERT INTO a table

    - by Michelle Jun Lee
    ## COMMENT OUT below just for reference
    ""
    cursor.execute ("""
        CREATE TABLE yellowpages
        (
            business_id BIGINT(20) NOT NULL AUTO_INCREMENT,
            categories_name VARCHAR(255),
            business_name VARCHAR(500) NOT NULL,
            business_address1 VARCHAR(500),
            business_city VARCHAR(255),
            business_state VARCHAR(255),
            business_zipcode VARCHAR(255),
            phone_number1 VARCHAR(255),
            website1 VARCHAR(1000),
            website2 VARCHAR(1000),
            created_date datetime,
            modified_date datetime,
            PRIMARY KEY(business_id)
        )
    """)
    ""
    ## TOP COMMENT OUT (just for reference)

    ## code
    website1g = "http://www.triman.com"
    business_nameg = "Triman Sales Inc"
    business_address1g = "510 E Airline Way"
    business_cityg = "Gardena"
    business_stateg = "CA"
    business_zipcodeg = "90248"
    phone_number1g = "(310) 323-5410"
    phone_number2g = ""
    website2g = ""
    cursor.execute ("""
        INSERT INTO yellowpages(categories_name, business_name, business_address1, business_city, business_state, business_zipcode, phone_number1, website1, website2)
        VALUES ('%s','%s','%s','%s','%s','%s','%s','%s','%s')
    """, (''gas-stations'', business_nameg, business_address1g, business_cityg, business_stateg, business_zipcodeg, phone_number1g, website1g, website2g))
    cursor.close()
    conn.close()

    I keep getting this error:

        File "testdb.py", line 51
            """, (''gas-stations'', business_nameg, business_address1g, business_cityg, business_stateg, business_zipcodeg, phone_number1g, website1g, website2g))
                                                                                                                                            ^
        SyntaxError: invalid syntax

    Any idea why? By the way, the up arrow is pointing to website1g (the b character). Thanks for the help in advance.

  • Can I remove items from a ConcurrentDictionary from within an enumeration loop of that dictionary?

    - by the-locster
    So, for example:

        ConcurrentDictionary<string,Payload> itemCache = GetItems();
        foreach(KeyValuePair<string,Payload> kvPair in itemCache)
        {
            if(TestItemExpiry(kvPair.Value))
            {
                // Remove expired item.
                Payload removedItem;
                itemCache.TryRemove(kvPair.Key, out removedItem);
            }
        }

    Obviously with an ordinary Dictionary this will throw an exception, because removing items changes the dictionary's internal state during the life of the enumeration. It's my understanding that this is not the case for a ConcurrentDictionary, as the IEnumerable it provides handles internal state changes. Am I understanding this right? Is there a better pattern to use?
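
    For reference, below is a minimal, self-contained sketch of the pattern under discussion (the Payload type and expiry rule are hypothetical stand-ins, not from the original post). ConcurrentDictionary's enumerator tolerates concurrent mutation, so calling TryRemove inside the loop does not invalidate it, though the enumeration may or may not observe concurrent changes:

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;

        class Payload
        {
            public DateTime Expiry; // hypothetical field used by the expiry test
        }

        class Program
        {
            static bool TestItemExpiry(Payload p)
            {
                return p.Expiry < DateTime.UtcNow;
            }

            static void Main()
            {
                var itemCache = new ConcurrentDictionary<string, Payload>();
                itemCache["stale"] = new Payload { Expiry = DateTime.UtcNow.AddMinutes(-5) };
                itemCache["fresh"] = new Payload { Expiry = DateTime.UtcNow.AddMinutes(5) };

                // Unlike Dictionary<K,V>, removing entries while enumerating does not throw.
                foreach (KeyValuePair<string, Payload> kvPair in itemCache)
                {
                    if (TestItemExpiry(kvPair.Value))
                    {
                        Payload removedItem;
                        itemCache.TryRemove(kvPair.Key, out removedItem);
                    }
                }

                Console.WriteLine(string.Join(", ", itemCache.Keys)); // prints: fresh
            }
        }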

  • Choosing which element to delete in a ListBox control

    - by Neir0
    Hi, I have created a ListBox control with the following DataTemplate:

        <DataTemplate x:Key="lb_Itemtemplate">
            <DockPanel>
                <TextBlock Text="{Binding}" DockPanel.Dock="Left" />
                <Button DockPanel.Dock="Right" Template="{StaticResource ButtonTemplate}" Width="20" Margin="0,1,1,10">Delete</Button>
                <Button DockPanel.Dock="Right" Template="{StaticResource ButtonTemplate}" Width="20" Margin="0,1,1,10">Highlight</Button>
            </DockPanel>
        </DataTemplate>

        <ListBox Name="listBox1" Grid.Column="0" Grid.Row="1"
                 DataContext="{Binding ElementName=cb_fields, Path=SelectedItem}"
                 ItemsSource="{Binding Path=PositiveXPathExpressions}"
                 ItemTemplate="{StaticResource lb_Itemtemplate}" />

    I want to delete an element from the PositiveXPathExpressions collection when the user clicks the Delete button, but how can I decide which element to delete?
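
    One common approach (a sketch under assumptions, since the original post shows no handler: a hypothetical DeleteButton_Click wired to the Delete button, and PositiveXPathExpressions assumed to be an ObservableCollection<string>): inside an ItemTemplate, each button's DataContext is the item it renders, so the handler can recover the element to remove directly.

        using System.Collections.ObjectModel;
        using System.Windows;
        using System.Windows.Controls;

        // Sketch: add Click="DeleteButton_Click" to the Delete button in the template.
        private void DeleteButton_Click(object sender, RoutedEventArgs e)
        {
            var button = (Button)sender;
            var item = (string)button.DataContext;  // the bound expression for this row
            var expressions = (ObservableCollection<string>)listBox1.ItemsSource;
            expressions.Remove(item);               // the ListBox updates automatically
        }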

  • difference between signtool and sn or al for assembly signing

    - by sveerap
    Hi, I see that the SN tool generates a private/public key pair for signing an assembly, and that using the AL tool we can assign a strong name to an assembly. We also have SignTool, which is used for signing assemblies (probably for use with certificates exclusively?). What is the exact difference between the two? Does SignTool have to be used when working with certificates, and can the same thing be achieved with SN? Or are they totally different? Please help.

  • MS SQL Bridge Table Constraints

    - by greg
    Greetings - I have a table of Articles and a table of Categories. An Article can be used in many Categories, so I have created a table of ArticleCategories like this:

        BridgeID int (PK)
        ArticleID int
        CategoryID int

    Now I want to create constraints/relationships such that the ArticleID-CategoryID combinations are unique AND the IDs must exist in the respective primary key tables (Articles and Categories). I have tried using both VS2008 Server Explorer and Enterprise Manager (SQL 2005) to create the FK relationships, but the results always prevent duplicate ArticleIDs in the bridge table, even though the CategoryID is different. I am pretty sure I am doing something obviously wrong, but I appear to have a mental block at this point. Can anyone please tell me how this should be done? Greatly appreciated!

  • insert and modify a record in an entity using Core Data

    - by aminfar
    I tried to find the answer to my question on the internet, but I could not. I have a simple entity in Core Data that has a Value attribute (an integer) and a Date attribute. I want to define two methods in my .m file. The first method is the ADD method: it takes two arguments, an integer value (entered by the user in the UI) and a date (the current date by default), and inserts a record into the entity based on those arguments. The second method is like an increment method: it uses the Date as a key to find a record and then increments that record's integer value. I don't know how to write these methods. (Assume that we have an Array Controller for the table in the xib file.)

  • Twitter Oauth Strategy with Warden + Devise Authentication Gems for Ruby

    - by Michael Waxman
    Devise, the authentication gem for Ruby based on Warden (another auth gem), does not support Twitter OAuth as an authentication strategy, BUT Warden does. There is a way to use the Warden Twitter OAuth strategy within Devise, but I cannot figure it out. I'm using the following block in the Devise config file:

        config.warden do |manager|
          manager.oauth(:twitter) do |twitter|
            twitter.consumer_secret = <SECRET>
            twitter.consumer_key = <KEY>
            twitter.options :site => 'http://twitter.com'
          end
          manager.default_strategies.unshift :twitter_oauth
        end

    But I keep getting all sorts of error messages. Does anyone know how to make this work? I'm assuming there is more to do here (configuring a new link/route to talk to Warden, maybe adding attributes to the Devise User model, etc.), but I can't figure out what they are. Please help.

  • What's the KeyCode for overwriting text in a TextBox in WinForms

    - by Ragha J
    I have a custom control which extends TextBox. In the KeyDown event of the control I have access to the KeyCode property of the KeyEventArgs. If the text in the textbox is selected and some other text is typed on top of it, the key codes I get in the KeyDown event are different each time, and in the KeyPress event I get the actual value. For example: if the textbox has the value 1234, and I select 1234 and type 5 on top of it, I want to be able to know in one of the events, by some key combination, that the old value 1234 is gone and the new value of the textbox is 5.
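
    One way to observe this (a minimal sketch with hypothetical names, not the asker's control): snapshot the text and the selection state in KeyDown, then compare once the change has landed, e.g. in OnTextChanged.

        using System;
        using System.Windows.Forms;

        // Sketch: a TextBox subclass that reports when a keystroke replaced a selection.
        public class OverwriteAwareTextBox : TextBox
        {
            private string oldText;
            private bool selectionCovered;

            protected override void OnKeyDown(KeyEventArgs e)
            {
                oldText = Text;
                // True when the coming keystroke will replace the current selection.
                selectionCovered = SelectionLength > 0;
                base.OnKeyDown(e);
            }

            protected override void OnTextChanged(EventArgs e)
            {
                if (selectionCovered)
                {
                    // e.g. old value "1234" replaced, new value "5"
                    Console.WriteLine("Replaced '{0}' with '{1}'", oldText, Text);
                    selectionCovered = false;
                }
                base.OnTextChanged(e);
            }
        }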

  • How to print CSS-applied background images with the WinForms WebBrowser control

    - by Geir-Tore Lindsve
    I am using the WebBrowser control in WinForms and discovered that background images which I apply with CSS are not included in the printouts. Is there a way to make the WebBrowser print the background of the displayed document too? Edit: Since I wanted to do this programmatically, I opted for this solution:

        using Microsoft.Win32;
        ...
        // "Main" must be opened writable for the SetValue calls below to succeed.
        RegistryKey regKey = Registry.CurrentUser
            .OpenSubKey("Software")
            .OpenSubKey("Microsoft")
            .OpenSubKey("Internet Explorer")
            .OpenSubKey("Main", true);

        // Get the current setting so that we can revert it after the print job
        var defaultValue = regKey.GetValue("Print_Background");
        regKey.SetValue("Print_Background", "yes");

        // Do the printing

        // Revert the registry key to the original value
        regKey.SetValue("Print_Background", defaultValue);

    Another way to handle this might be to just read the value and notify the user to adjust it himself before printing. I have to agree that tweaking the registry like this is not good practice, so I am open to any suggestions. Thanks for all your feedback.

  • Visual Studio Package for 2005/2008/2010 ??

    - by asp2go
    We are looking to turn an internal tool we have developed into a Visual Studio package that we would sell to other developers. The tool will affect the custom editor and/or custom languages. Visual Studio 2010 has heavily redesigned the APIs to simplify much of the work involved in these types of integration, but the key question we have is: what is the typical adoption pace of new Visual Studio versions? Is there any information out there on adoption rates based on history? How many shops are still using 2005? This will help us decide whether to target just 2010 using the new APIs, or to try to go back and support 2008 (maybe 2005) and test it forward.

  • Creating and Indexing Email Database using SqlCe

    - by Anindya Chatterjee
    I am creating a simple email client program, using MS SqlCe as the storage for emails. The database schema for storing the messages is as follows:

        StorageId int IDENTITY NOT NULL PRIMARY KEY,
        FolderName nvarchar(255) NOT NULL,
        MessageId nvarchar(3999) NOT NULL,
        MessageDate datetime NOT NULL,
        StorageData ntext NULL

    In the StorageData field I am going to store the MIME message as a byte array. But the problem arises when I go to implement search over the stored messages: I have no idea how to index the messages on top of this schema. Can anyone please suggest a good but simple schema that is effective in terms of storage space and search friendliness as well? Regards, Anindya Chatterjee

  • Dictionary<string,string> to Dictionary<Control,object> using IEnumerable<T>.Select()

    - by abatishchev
    I have a System.Collections.Generic.Dictionary<string, string> containing a control ID and the corresponding data column to data-bind:

        var dic = new Dictionary<string, string>
        {
            { "Label1", "FooCount" },
            { "Label2", "BarCount" }
        };

    I use it this way:

        var row = ((DataRowView)FormView1.DataItem).Row;
        Dictionary<Control, object> newOne = dic.ToDictionary(
            k => FormView1.FindControl(k.Key),
            k => row[k.Value]);

    So I'm using IEnumerable<T>.ToDictionary(Func<T>, Func<T>). Is it possible to do the same using IEnumerable<T>.Select(Func<T>)?
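
    A sketch of a Select-based version (continuing the question's FormView1, row, and dic; not from the original post): Select can project each pair, but a ToDictionary call (or an explicit loop) is still needed to materialize the dictionary, because Select alone only yields a lazy sequence.

        // Select projects the pairs; ToDictionary still has to materialize them.
        Dictionary<Control, object> newOne = dic
            .Select(kv => new
            {
                Control = FormView1.FindControl(kv.Key),
                Value = row[kv.Value]
            })
            .ToDictionary(x => x.Control, x => x.Value);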

  • EMail Signing (Outlook) Using Smartcard Minidriver [Windows]

    - by Baget
    Hi, I'm developing a smart card minidriver and I'm trying to sign an email using Outlook 2007. I have implemented all of the necessary functions in the minidriver, and I'm able to create a "Smartcard User" certificate and save it and its private key on the smart card (using Microsoft Certificate Services via the minidriver). When I try to sign an email via Outlook I get an error message (Internal Error); the last call to the minidriver is a ReadFile for "cmapfile". When I try to sign an email via Outlook with a different, non-smartcard certificate it works fine. When I try to sign data using CryptoAPI (based on a Windows SDK sample) it also works fine. I'm using Windows 7. Does anyone have an idea how to debug this issue? I tried enabling the CAPI2 event log, but it doesn't give me any useful information.

  • JPA IllegalArgumentException

    - by Bunny Rabbit
    I have three entity classes in my project:

        public class Blobx {
            @ManyToOne
            private Userx user;
            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            private Key id;
        }

        public class Index {
            @Id
            private String keyword;
            @OneToMany(cascade = CascadeType.PERSIST)
            private List<Blobx> blobs;
        }

        public class Userx {
            @Id
            private String name;
            @OneToMany(mappedBy = "user")
            private List<Blobx> blobs;
        }

    While running the following lines, App Engine throws an exception:

        em.getTransaction().begin();
        em.persist(index);
        em.getTransaction().commit(); // exception is thrown
        em.close();

        Caused by: java.lang.IllegalArgumentException: can't operate on multiple entity groups in a single transaction. found both Element { type: "Index" name: "function" } and Element { type: "Userx" name: "18580476422013912411" }

    I can't understand what's wrong.

  • L2E delete exception

    - by 5YrsLaterDBA
    I have the following code to delete a user from the database:

        try
        {
            var user = from u in db.Users
                       where u.Username == username
                       select u;
            if (user.Count() > 0)
            {
                db.DeleteObject(user.First());
                db.SaveChanges();
            }
        }

    but I get an exception like this:

        at System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter)
        at System.Data.EntityClient.EntityAdapter.Update(IEntityStateManager entityCache)
        at System.Data.Objects.ObjectContext.SaveChanges(Boolean acceptChangesDuringSave)
        at System.Data.Objects.ObjectContext.SaveChanges()
        at MyCompany.SystemSoftware.DQMgr.User.DeleteUser(String username) in C:\workspace\SystemSoftware\SystemSoftware\src\dqm\User.cs:line 479

    The Users table is referenced by a few other tables. Is it probably caused by a foreign key constraint?

  • Convert var to DataTable

    - by cre-johnny07
    I have a var items which I want to convert into a DataTable:

        var items = (from myObj in this.Context.Orders
                     group myObj by myObj.OrderDate.ToString("yyyy-mm") into ymGroup
                     select new
                     {
                         Date = ymGroup.Key,
                         Start = ymGroup.Min(c => c.OrderId),
                         End = ymGroup.Max(c => c.OrderId)
                     });

    I need to convert items into a DataTable, and I don't want to use any foreach loop. How can I do this?
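
    A minimal sketch of one option (assuming the Date/Start/End shape above and an int OrderId, neither confirmed by the post): some code still has to visit each row, but Aggregate keeps the explicit foreach out of the calling code.

        using System.Data;
        using System.Linq;

        // Build the table shape once, then fold the query results into it.
        DataTable table = new DataTable();
        table.Columns.Add("Date", typeof(string));
        table.Columns.Add("Start", typeof(int));   // assumes OrderId is an int
        table.Columns.Add("End", typeof(int));

        DataTable result = items.AsEnumerable().Aggregate(table, (t, i) =>
        {
            t.Rows.Add(i.Date, i.Start, i.End);
            return t;
        });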

  • nodejs daemon wrong architecture

    - by Greg Pagendam-Turner
    I'm trying to run 'dali', a Highcharts exporter for Node.js, on my Mac under OS X Mountain Lion. I'm getting the following error:

        module.js:485
        process.dlopen(filename, module.exports);
        ^
        Error: dlopen(/Users/greg/node_modules/daemon/lib/daemon.v0.8.8.node, 1): no suitable image found. Did find:
        /Users/greg/node_modules/daemon/lib/daemon.v0.8.8.node: mach-o, but wrong architecture
            at Object.Module._extensions..node (module.js:485:11)
            at Module.load (module.js:356:32)
            at Function.Module._load (module.js:312:12)
            at Module.require (module.js:362:17)
            at require (module.js:378:17)
            at Object.<anonymous> (/Users/greg/node_modules/daemon/lib/daemon.js:12:11)
            at Module._compile (module.js:449:26)
            at Object.Module._extensions..js (module.js:467:10)
            at Module.load (module.js:356:32)
            at Function.Module._load (module.js:312:12)

    The key part is "wrong architecture". If I run:

        lipo -info /Users/greg/node_modules/daemon/lib/daemon.v0.8.8.node

    it returns:

        Non-fat file: /Users/greg/node_modules/daemon/lib/daemon.v0.8.8.node is architecture: i386

    I'm guessing an x64 version is required. How do I get and install the 64-bit version of this lib?

  • How to group a complex list of objects using LINQ?

    - by Daoming Yang
    I want to select and group the products, and rank them by the number of times they occur. For example, I have an order list; each Order object has an OrderProductVariantList (OrderLineList), each OrderProductVariant object has a ProductVariant, and the ProductVariant object has a Product object which contains the product information. A friend helped me with the following code. It compiles, but it does not return any value/result: when I used the watch window on the query, it gave me "The name 'query' does not exist in the current context". Can anyone help me? Many thanks.

        var query = orderList
            .SelectMany(o => o.OrderLineList)   // results in IEnumerable<OrderProductVariant>
            .Select(opv => opv.ProductVariant)
            .Select(pv => pv.Product)
            .GroupBy(p => p)
            .Select(g => new { Product = g.Key, Count = g.Count() });
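
    One detail worth noting (an observation, not from the original post): the query above is lazily evaluated, so nothing executes until it is enumerated, and in optimized builds the debugger can report the variable as unavailable. A sketch that materializes the query and adds the ranking by occurrence count:

        // ToList() forces evaluation; OrderByDescending gives the requested ranking.
        var ranked = orderList
            .SelectMany(o => o.OrderLineList)
            .Select(opv => opv.ProductVariant)
            .Select(pv => pv.Product)
            .GroupBy(p => p)
            .Select(g => new { Product = g.Key, Count = g.Count() })
            .OrderByDescending(x => x.Count)
            .ToList();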

  • Fine tune some SQL called multiple times

    - by Carl
    Hi all, I currently have an SQL query which is called multiple times, like this (pseudocode):

        foreach(KeyValuePair kvp in someMapList)
        {
            select col1 from table where col2 = kvp.key and col3 = kvp.value;
            // do some processing on the returned value
        }

    The above could obviously hit the database a large number of times if the list of pairs is big. Can anyone think of a more efficient piece of SQL which could essentially return a list of specific values based on a list of two unique pieces of information, so that the processing can be done in bulk? One obvious option would be to build up a big piece of SQL with ORs, but this would probably be quite inefficient? Thanks, Carl
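
    For what it's worth, here is a sketch of the OR-chain approach done in a single round trip (the table and column names are the placeholders from the pseudocode, and someMapList is assumed to be a Dictionary<string, string>). One parameterized query usually beats N separate calls; for very large lists, the usual next step is a temp table or table-valued parameter joined against the target.

        using System.Collections.Generic;
        using System.Data.SqlClient;

        // Builds: SELECT col2, col3, col1 FROM table
        //         WHERE (col2 = @k0 AND col3 = @v0) OR (col2 = @k1 AND col3 = @v1) ...
        var cmd = new SqlCommand();
        var clauses = new List<string>();
        int i = 0;
        foreach (KeyValuePair<string, string> kvp in someMapList)
        {
            clauses.Add(string.Format("(col2 = @k{0} AND col3 = @v{0})", i));
            cmd.Parameters.AddWithValue("@k" + i, kvp.Key);
            cmd.Parameters.AddWithValue("@v" + i, kvp.Value);
            i++;
        }
        cmd.CommandText = "SELECT col2, col3, col1 FROM table WHERE "
            + string.Join(" OR ", clauses.ToArray());
        // Attach a connection, call ExecuteReader, and process all rows in one pass.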

  • DIV Overlaying DIV, links in backmost DIV non-accessible

    - by Shawn
    I have a Photoshop image that is 500x600 which I am using as the background image for the first div. The second div sits on top of the first div and is populated with images. The z-index of the first div is set to 100. The effect is that the Photoshop image sits on top of all of the smaller images. The Photoshop image is a letter with its inner content set to be transparent, and the small images create the filling. Each of the small images, however, is a link, but none of the links are accessible. How can I remedy that? I would post the code here, but I have absolutely no clue how to format it. I quite simply do not understand typing a backtick followed by the tab key and then a dollar sign. All that ends up happening is I type a backtick followed by a dollar sign in the Tags section below.

  • Scanf with Signals

    - by jreid42
    I have a signal handler that blocks SIGINT and basically says "Sorry, you can't quit.\n". The issue is that this can occur during a scanf. When it does, scanf takes the printf output as input. How can I do a printf that will cause scanf to behave as if the Enter key were hit automatically? I don't care that I am getting bad input; I just want to programmatically finish that scanf with a printf or something else. The process:

        - scanf("get stuff")
        - User is able to enter stuff in.
        - SIGINT occurs and goes to my handler.
        - Handler says "Blah blah blah" to stdout.
        - scanf has taken this blah blah blah and is waiting for more input.

    How do I make it so that when I return, scanf is finished? (I don't care what it has gathered; I just want it to continue without user help.)

  • Another Marketing Conference, part one – the best morning sessions.

    - by Roger Hart
    Yesterday I went to Another Marketing Conference. I honestly can’t tell if the title is just tipping over into smug, but in the balance of things that doesn’t matter, because it was a good conference. There was an enjoyable blend of theoretical and practical, and enough inter-disciplinary spread to keep my inner dilettante grinning from ear to ear. Sure, there was a bumpy bit in the middle, with two back-to-back sales pitches and a rather thin overview of the state of the web. But the signal:noise ratio at AMC2012 was impressively high. Here’s the first part of my write-up of the sessions. It’s a bit of a mammoth. It’s also a bit of a mash-up of what was said and what I thought about it. I’ll add links to the videos and slides from the sessions as they become available. Although it was in the morning session, I’ve not included Vanessa Northam’s session on the power of internal comms to build brand ambassadors. It’ll be in the next roundup, as this is already pushing 2.5k words. First, the important stuff. I was keeping a tally, and nobody said “synergy” or “leverage”. I did, however, hear the term “marketeers” six times. Shame on you – you know who you are.

    1 – Branding in a post-digital world, Graham Hales

    This initially looked like being a sales presentation for Interbrand, but Graham pulled it out of the bag a few minutes in. He introduced a model for brand management that was essentially Plan >> Do >> Check >> Act, with Do and Check rolled up together, and went on to stress that this looks like an overall business management model for a reason. Brand has to be part of your overall business strategy and metrics if you’re going to care about it at all. This was the first iteration of what proved to be one of the event’s emergent themes: do it throughout the stack or don’t bother. Graham went on to remind us that brands, in so far as they are owned at all, are owned by and co-created with our customers. Advertising can offer a message to customers, but they provide the expression of a brand. This was a preface to talking about an increasingly chaotic marketplace, with increasingly hard-to-manage purchase processes. Services like Amazon reviews and TripAdvisor (four presenters would make this point) saturate customers with information, and give them a kind of vigilante power to comment on and define brands. Consequently, they experience a number of “moments of deflection” in our sales funnels. Our control is lessened, and failure to engage can damage buying decisions increasingly badly. The clearest example given was the failure of NatWest’s “caring bank” campaign, where staff in branches, customer support, and online presences didn’t align. A discontinuity of experience basically made the campaign worthless, and disgruntled customers talked about it loudly on social media. This in turn presented an opportunity to engage and show caring, but that wasn’t taken. What I took away was that brand (co)creation is ongoing and needs monitoring and metrics. But reciprocally, given you get what you measure, strategy and metrics must include brand if any kind of branding is to work at all. Campaigns and messages must permeate product and service design. What that doesn’t mean (and Graham didn’t say it did) is putting Marketing at the top of the pyramid, and having them bawl demands at Product Management, Support, and Development like an entitled toddler. It’s going to have to be collaborative, and session 6 on internal comms handled this really well.
    The main thing missing here was substantiating data, and the main question I found myself chewing on was: if we’re building brands collaboratively and in the open, what about the cultural politics of trolling?

    2 – Challenging our core beliefs about human behaviour, Mark Earls

    This was definitely the best show of the day. It was also some of the best content. Mark talked us through nudging, behavioural economics, and some key misconceptions around decision making. Basically, people aren’t rational; they’re petty, reactive, emotional sacks of meat, and they’ll go where they’re led. Comforting stuff. Examples given were the spread of the London riots, the “discovery” of the mountains of Kong, and the popularity of Susan Boyle, which in turn made me think about Per Mollerup’s concept of “social wayshowing”. Mark boiled his thoughts down into four key points which I completely failed to write down word for word:

    - People do, then think – Changing minds to change behaviour doesn’t work. Post-rationalization rules the day. See also: mere exposure effects.
    - Spock < Kirk – Emotional/intuitive comes first, then we rationalize impulses. The non-thinking, emotive, reactive processes run much faster than the deliberative ones. People are not really rational decision makers, so intervening with information may not be appropriate.
    - Maximisers or satisficers? – Related to the last point. People do not consistently, rationally, maximise. When faced with an abundance of choice, they prefer to satisfice than evaluate, and will often follow social leads rather than think.
    - Things tend to converge – Behaviour trends to a consensus normal. When faced with choices, people overwhelmingly just do what they see others doing. Humans are extraordinarily good at mirroring behaviours and receiving influence. People “outsource the cognitive load” of choices to the crowd.

    Mark’s headline quote was probably “the real influence happens at the table next to you”. Reference examples, word of mouth, and social influence are tremendously important, and so talking about product experiences may be more important than talking about products. This reminded me of Kathy Sierra’s “creating bad-ass users” concept of designing to make people more awesome rather than products they like. If we can expose user-awesome, and make sharing easy, we can normalise the behaviours we want. If we normalize the behaviours we want, people should make and post-rationalize the buying decisions we want.

    Where we need to be: “A bigger boy made me do it”
    Where we are: “a wizard did it and ran away”

    However, it’s worth bearing in mind that some purchasing decisions are personal and informed rather than social and reactive. There’s a quadrant diagram, in fact. What was really interesting, though, towards the end of the talk, was some advice for working out how social your products might be. The standard technology adoption lifecycle graph is essentially about social product diffusion, so this idea isn’t really new. Geoffrey Moore’s “chasm” idea may not strictly apply. However, his concepts of beachheads and reference segments are exactly what is required to normalize and thus enable purchase decisions (behaviour change). The final thing is that in only very few categories does a better product actually affect the purchase decision. Where the choice is personal and informed, this is true. But where it’s personal and impulsive, or in any way social, “better” is trumped by popularity, endorsement, or “point of sale salience”.
    UX, UCD, and e-commerce know this to be true. A better (and easier) experience will always beat “more features”. Easy to use, and easy to observe being used, will beat “what the user says they want”. This made me think about the astounding stickiness of rational fallacies, “common sense”, and the pathological willful simplifications of the media. Rational fallacies seem like they’re basically the heuristics we use for post-rationalization. If I were profoundly grimy and cynical, I’d suggest deploying a boat-load in our messaging, to see if they’re really as sticky and appealing as they look.

    4 – Changing behaviour through communication, Stephen Donajgrodzki

    This was a fantastic follow-up to Mark’s session. Stephen basically talked us through some tactics used in public information/health comms that implement the kind of behavioural theory Mark introduced. The session was largely about how to get people to do (good) things they’re predisposed not to do, and how communication can (and can’t) make positive interventions. A couple of things stood out, in particular “implementation intentions” and how they can be linked to goals. For example, in order to get people to check and test their smoke alarms (a goal intention, rarely actualized), an information campaign will attempt to link this activity to the clocks going back or forward (a strong implementation intention, well actualized). The talk reinforced the idea that making behaviour changes easy and visible normalizes them and makes them more likely to succeed. To do this, they have to be embodied throughout a product and service cycle. Experiential disconnects undermine the normalization, so campaigns, products, and customer interactions must be aligned. This is underscored by the second section of the presentation, which talked about interventions and preconditions for change. Taking the examples of drug addiction and stopping smoking, Stephen showed us a framework for attempting (and succeeding or failing in) behaviour change. He noted that when the change is something people fundamentally want to do, and that is easy, this gets a lot simpler:

    - Coordinated, easily-observed environmental pressures create preconditions for change and build motivation (price, pub smoking ban, ad campaigns, friend quitting, declining social acceptability).
    - A triggering event leads to a change attempt (getting a cold and panicking about how bad the cough is).
    - Interventions can be made to enable an attempt (NHS services, public information, nicotine patches).
    - If it succeeds – yay. If it fails, there’s strong negative reinforcement.

    Triggering events seem largely personal, but messaging can intervene in the creation of preconditions and in supporting decisions. Stephen talked more about systems of thinking and “bounded rationality”, the idea being that to enable change you need to break through “automatic” thinking into “reflective” thinking. Disruption and emotion are great tools for this, but that is only the start of the process. It occurs to me that a great deal of market research is focused on determining triggers rather than analysing necessary preconditions, although they are presumably related. The final section talked about setting goals. Marketing goals are often seen as deriving directly from business goals. However, marketing may be unable to deliver on these directly where decision and behaviour-change processes are involved. In those cases, marketing and communication goals should be to create preconditions. They should also consider priming and norms.
    Content marketing and brand awareness are good first steps here, as brands can be heuristics in decision making for choice-saturated consumers, or those seeking education.

    5 – The power of engaged communities and how to build them, Harriet Minter (the Guardian)

    The meat of this was that you need to let communities define and establish themselves, and be quick to react to their needs. Harriet had been in charge of building the Guardian’s community sites, and learned a lot about how they come together, stabilize, grow, and react. Crucially, they can’t be about sales or push messaging. A community is not just an audience. It’s essential to start with what this particular segment or tribe are interested in, then what they want to hear. Eventually you can consider – in light of this – what they might want to buy, but you can’t start with the product. A community won’t cohere around one you’re pushing. Her tips for community building were (again, sorry, not verbatim):

    - Set goals – Have some targets. Community building sounds vague and fluffy, but you can have (and adjust) concrete goals.
    - Think like a start-up – This is the “lean” stuff. Try things, fail quickly, respond.
    - Don’t restrict platforms – Let the audience choose them, and be aware of their differences. For example, LinkedIn is very different to Twitter.
    - Track your stats – Related to the first point. Keeping an eye on the numbers lets you respond. They should be qualified, however. If you want a community of enterprise decision makers, headcount alone may be a bad metric – have you got CIOs, or just people who want to get jobs by mingling with CIOs?
    - Build brand advocates – Do things to involve people and make them awesome, and they’ll cheer-lead for you.

    The last part really got my attention. Little bits of drive-by kindness go a long way. But more than that, genuinely helping people turns them into powerful advocates. Harriet gave an example of the Guardian engaging with an aspiring journalist on its Q&A forums. Through a series of serendipitous encounters he became a BBC producer, and now enthusiastically speaks up for the Guardian community sites. Cultivating many small, authentic, influential voices may have a better pay-off than schmoozing the big guys. This could be particularly important in the context of Mark and Stephen’s models of social, endorsement-led, and example-led decision making. There’s a lot here I haven’t covered, and it may be worth some follow-up on community building.

    Thoughts

    I was quite sceptical of nudge theory and behavioural economics. First off it sounds too good to be true, and second it sounds too sinister to permit. But I haven’t done the background reading. So I’m going to, and if it seems to hold real water, and if it’s possible to do it ethically (Stephen’s presentation suggests it may be), then it’s probably worth exploring. The message seemed to be: change what people do, and they’ll work out why afterwards. Moreover, the people around them will do it too. Make the things you want them to do extraordinarily easy and very, very visible. Normalize and support the decisions you want them to make, and they’ll make them. In practice this means not talking about the thing, but showing the user-awesome. Glib? Perhaps. But it feels worth considering. Also, if I ever run a marketing conference, I’m going to ban speakers from using examples from Apple. Quite apart from not being consistently generalizable, it’s becoming an irritating cliché.

  • Assign binding path at runtime in WPF DataTemplate

    - by Abiel
    I am writing a WPF program in C# in which I have a ListView whose columns will be populated at runtime. I would like to use a custom DataTemplate for the GridViewColumn objects in the ListView. In the examples I have seen, where the number of columns is fixed in advance, a custom DataTemplate is often created using something like the XAML below:

        <DataTemplate x:Key="someKey">
            <TextBlock Text="{Binding Path=FirstName}" />
        </DataTemplate>

    This DataTemplate could also later be assigned to GridViewColumn.CellTemplate in the code-behind by calling FindResource("someKey"). However, this alone is of no use to me, because in this example the Path element is fixed to FirstName. Really I need something where I can set the Path in code. It is my impression that something along these lines may be possible if XamlReader is used, but I'm not sure how in practice I would do this. Any solutions are greatly appreciated.
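
    A minimal sketch of the XamlReader route (the helper name and the FirstName path are illustrative): build the template markup as a string with the runtime Path spliced in, then parse it.

        using System.Windows;
        using System.Windows.Markup;

        // Builds a cell template whose binding Path is only known at runtime.
        static DataTemplate BuildCellTemplate(string bindingPath)
        {
            string xaml =
                "<DataTemplate xmlns='http://schemas.microsoft.com/winfx/2006/xaml/presentation'>" +
                "<TextBlock Text='{Binding Path=" + bindingPath + "}' />" +
                "</DataTemplate>";
            return (DataTemplate)XamlReader.Parse(xaml);
        }

        // Usage (hypothetical column):
        // gridViewColumn.CellTemplate = BuildCellTemplate("FirstName");

    An alternative is to compose the template in code with FrameworkElementFactory, but the string-plus-XamlReader approach is usually easier to read and maintain.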

  • what is the best way to generate fake data for a classification problem?

    - by Berkay
    I'm working on a project and I have a subset of a user's keystroke timing data. The user makes n attempts, and I will use the timing data recorded from these attempts in various kinds of classification algorithms, so that future login attempts can be verified as being made by the user and not by some other person. (Simply put, this is biometrics.) I have 3 different times from the user's login attempt process; of course this is a subset of the infinite data. Until now it is an easy classification problem, and I decided to use WEKA, but as far as I understand I have to create some fake data to feed the classification algorithm. Can I use some optimization algorithms? Or is there any way to create this fake data so as to get minimal false positives? Thanks

  • Windows Azure Service Bus Splitter and Aggregator

    - by Alan Smith
    This article will cover basic implementations of the Splitter and Aggregator patterns using the Windows Azure Service Bus. The content will be included in the next release of the “Windows Azure Service Bus Developer Guide”, along with some other patterns I am working on. I’ve taken the pattern descriptions from the book “Enterprise Integration Patterns” by Gregor Hohpe. I bought a copy of the book in 2004, and recently dusted it off when I started to look at implementing the patterns on the Windows Azure Service Bus. Gregor also presented a session in 2011, “Enterprise Integration Patterns: Past, Present and Future”, which is well worth a look. I’ll be covering more patterns in the coming weeks; I’m currently working on Wire-Tap and Scatter-Gather. There will no doubt be a section on implementing these patterns in my “SOA, Connectivity and Integration using the Windows Azure Service Bus” course.

    There are a number of scenarios where a message needs to be divided into a number of sub messages, and also where a number of sub messages need to be combined to form one message. The splitter and aggregator patterns provide a definition of how this can be achieved. This section will focus on the implementation of basic splitter and aggregator patterns using the Windows Azure Service Bus direct programming model. In BizTalk Server, receive pipelines are typically used to implement the splitter pattern, with sequential convoy orchestrations often used to aggregate messages. In the current release of the Service Bus, there is no functionality in the direct programming model that implements these patterns, so it is up to the developer to implement them in the applications that send and receive messages.

    Splitter

    A message splitter takes a message and splits it into a number of sub messages. As there are different scenarios for how a message can be split into sub messages, message splitters are implemented using different algorithms. The Enterprise Integration Patterns book describes the splitter pattern as follows:

        How can we process a message if it contains multiple elements, each of which may have to be processed in a different way? Use a Splitter to break out the composite message into a series of individual messages, each containing data related to one item.

    The Enterprise Integration Patterns website provides a description of the Splitter pattern here. In some scenarios a batch message could be split into the sub messages that are contained in the batch. The splitting of a message could be based on the message type of the sub message, or on the trading partner that the sub message is to be sent to.

    Aggregator

    An aggregator takes a stream of related messages and combines them together to form one message. The Enterprise Integration Patterns book describes the aggregator pattern as follows:

        How do we combine the results of individual, but related messages so that they can be processed as a whole? Use a stateful filter, an Aggregator, to collect and store individual messages until a complete set of related messages has been received. Then, the Aggregator publishes a single message distilled from the individual messages.

    The Enterprise Integration Patterns website provides a description of the Aggregator pattern here. A common example of the need for an aggregator is in scenarios where a stream of messages needs to be combined into a daily batch to be sent to a legacy line-of-business application. The BizTalk Server EDI functionality provides support for batching messages in this way using a sequential convoy orchestration.

    Scenario

    The scenario for this implementation of the splitter and aggregator patterns is the sending and receiving of large messages using a Service Bus queue. In the current release, the Windows Azure Service Bus supports a maximum message size of 256 KB, with a maximum header size of 64 KB. This leaves a safe maximum body size of 192 KB. The BrokeredMessage class will support messages larger than 256 KB; in fact the Size property is of type long, implying that very large messages may be supported at some point in the future. The 256 KB size restriction is set in the service bus components that are deployed in the Windows Azure data centers. One of the ways of working around this size restriction is to split large messages into a sequence of smaller sub messages in the sending application, send them via a queue, and then reassemble them in the receiving application. This scenario will be used to demonstrate the pattern implementations.

    Implementation

    The splitter and aggregator will be used to provide functionality to send and receive large messages over the Windows Azure Service Bus. In order to make the implementations generic and reusable, they will be implemented as a class library. The splitter will be implemented in the LargeMessageSender class and the aggregator in the LargeMessageReceiver class. A class diagram showing the two classes is shown below.

    Implementing the Splitter

    The splitter will take a large brokered message and split it into a sequence of smaller sub messages that can be transmitted over the service bus messaging entities. The LargeMessageSender class provides a Send method that takes a large brokered message as a parameter. The implementation of the class is shown below; console output has been added to provide details of the splitting operation.

        public class LargeMessageSender
        {
            private static int SubMessageBodySize = 192 * 1024;
            private QueueClient m_QueueClient;

            public LargeMessageSender(QueueClient queueClient)
            {
                m_QueueClient = queueClient;
            }

            public void Send(BrokeredMessage message)
            {
                // Calculate the number of sub messages required.
                long messageBodySize = message.Size;
                int nrSubMessages = (int)(messageBodySize / SubMessageBodySize);
                if (messageBodySize % SubMessageBodySize != 0)
                {
                    nrSubMessages++;
                }

                // Create a unique session Id.
                string sessionId = Guid.NewGuid().ToString();
                Console.WriteLine("Message session Id: " + sessionId);
                Console.Write("Sending {0} sub-messages", nrSubMessages);

                Stream bodyStream = message.GetBody<Stream>();
                for (int streamOffest = 0; streamOffest < messageBodySize;
                    streamOffest += SubMessageBodySize)
                {
                    // Get the stream chunk from the large message
                    long arraySize = (messageBodySize - streamOffest) > SubMessageBodySize
                        ? SubMessageBodySize : messageBodySize - streamOffest;
                    byte[] subMessageBytes = new byte[arraySize];
                    int result = bodyStream.Read(subMessageBytes, 0, (int)arraySize);
                    MemoryStream subMessageStream = new MemoryStream(subMessageBytes);

                    // Create a new message
                    BrokeredMessage subMessage = new BrokeredMessage(subMessageStream, true);
                    subMessage.SessionId = sessionId;

                    // Send the message
                    m_QueueClient.Send(subMessage);
                    Console.Write(".");
                }
                Console.WriteLine("Done!");
            }
        }

    The LargeMessageSender class is initialized with a QueueClient that is created by the sending application. When the large message is sent, the number of sub messages is calculated based on the size of the body of the large message. A unique session Id is created to allow the sub messages to be sent as a message session; this session Id will be used for correlation in the aggregator. A for loop is then used to create the sequence of sub messages by creating chunks of data from the stream of the large message. The sub messages are then sent to the queue using the QueueClient. As sessions are used to correlate the messages, the queue used for message exchange must be created with the RequiresSession property set to true.

    Implementing the Aggregator

    The aggregator will receive the sub messages in the message session that was created by the splitter, and combine them to form a single, large message. The aggregator is implemented in the LargeMessageReceiver class, with a Receive method that returns a BrokeredMessage. The implementation of the class is shown below; console output has been added to provide details of the aggregation operation.

        public class LargeMessageReceiver
        {
            private QueueClient m_QueueClient;

            public LargeMessageReceiver(QueueClient queueClient)
            {
                m_QueueClient = queueClient;
            }

            public BrokeredMessage Receive()
            {
                // Create a memory stream to store the large message body.
                MemoryStream largeMessageStream = new MemoryStream();

                // Accept a message session from the queue.
                MessageSession session = m_QueueClient.AcceptMessageSession();
                Console.WriteLine("Message session Id: " + session.SessionId);
                Console.Write("Receiving sub messages");

                while (true)
                {
                    // Receive a sub message
                    BrokeredMessage subMessage = session.Receive(TimeSpan.FromSeconds(5));

                    if (subMessage != null)
                    {
                        // Copy the sub message body to the large message stream.
                        Stream subMessageStream = subMessage.GetBody<Stream>();
                        subMessageStream.CopyTo(largeMessageStream);

                        // Mark the message as complete.
                        subMessage.Complete();
                        Console.Write(".");
                    }
                    else
                    {
                        // The last message in the sequence is our completeness criteria.
                        Console.WriteLine("Done!");
                        break;
                    }
                }

                // Create an aggregated message from the large message stream.
                BrokeredMessage largeMessage = new BrokeredMessage(largeMessageStream, true);
                return largeMessage;
            }
        }

    The LargeMessageReceiver is initialized using a QueueClient that is created by the receiving application. The Receive method creates a memory stream that will be used to aggregate the large message body. The AcceptMessageSession method on the QueueClient is then called, which will wait for the first message in a message session to become available on the queue. As AcceptMessageSession can throw a timeout exception if no message is available on the queue after 60 seconds, a real-world implementation should handle this accordingly. Once the message session is accepted, the sub messages in the session are received and their message body streams are copied to the memory stream. Once all the messages have been received, the memory stream is used to create a large message that is then returned to the receiving application.

    Testing the Implementation

    The splitter and aggregator are tested by creating a message sender and a message receiver application. The payload for the large message will be one of the webcast video files from http://www.cloudcasts.net/; the file size is 9,697 KB, well over the 256 KB threshold imposed by the Service Bus. As the splitter and aggregator are implemented in a separate class library, the code used in the sender and receiver consoles is fairly basic. The implementation of the main method of the sending application is shown below.

        static void Main(string[] args)
        {
            // Create a token provider with the relevant credentials.
            TokenProvider credentials =
                TokenProvider.CreateSharedSecretTokenProvider
                (AccountDetails.Name, AccountDetails.Key);

            // Create a URI for the service bus.
            Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri
                ("sb", AccountDetails.Namespace, string.Empty);

            // Create the MessagingFactory
            MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);

            // Use the MessagingFactory to create a queue client
            QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);

            // Open the input file.
            FileStream fileStream = new FileStream(AccountDetails.TestFile, FileMode.Open);

            // Create a BrokeredMessage for the file.
            BrokeredMessage largeMessage = new BrokeredMessage(fileStream, true);

            Console.WriteLine("Sending: " + AccountDetails.TestFile);
            Console.WriteLine("Message body size: " + largeMessage.Size);
            Console.WriteLine();

            // Send the message with a LargeMessageSender
            LargeMessageSender sender = new LargeMessageSender(queueClient);
            sender.Send(largeMessage);

            // Close the messaging factory.
            factory.Close();
        }

    The implementation of the main method of the receiving application is shown below.

        static void Main(string[] args)
        {
            // Create a token provider with the relevant credentials.
            TokenProvider credentials =
                TokenProvider.CreateSharedSecretTokenProvider
                (AccountDetails.Name, AccountDetails.Key);

            // Create a URI for the service bus.
            Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri
                ("sb", AccountDetails.Namespace, string.Empty);

            // Create the MessagingFactory
            MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);

            // Use the MessagingFactory to create a queue client
            QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);

            // Create a LargeMessageReceiver and receive the message.
            LargeMessageReceiver receiver = new LargeMessageReceiver(queueClient);
            BrokeredMessage largeMessage = receiver.Receive();

            Console.WriteLine("Received message");
            Console.WriteLine("Message body size: " + largeMessage.Size);

            string testFile = AccountDetails.TestFile.Replace(@"\In\", @"\Out\");
            Console.WriteLine("Saving file: " + testFile);

            // Save the message body as a file.
            Stream largeMessageStream = largeMessage.GetBody<Stream>();
            largeMessageStream.Seek(0, SeekOrigin.Begin);
            FileStream fileOut = new FileStream(testFile, FileMode.Create);
            largeMessageStream.CopyTo(fileOut);
            fileOut.Close();

            Console.WriteLine("Done!");
        }

    In order to test the application, the sending application is executed, which will use the LargeMessageSender class to split the message and place it on the queue. The output of the sender console is shown below. The console shows that the body size of the large message was 9,929,365 bytes, and the message was sent as a sequence of 51 sub messages. When the receiving application is executed, the results are shown below. The console application shows that the aggregator has received the 51 messages from the message sequence that was created in the sending application. The messages have been aggregated to form a message with a body of 9,929,365 bytes, which is the same as the original large message. The message body is then saved as a file.

    Improvements to the Implementation

    The splitter and aggregator patterns in this implementation were created in order to show the usage of the patterns in a demo, which they do quite well. When implementing these patterns in a real-world scenario there are a number of improvements that could be made to the design.

    Copying Message Header Properties

    When sending a large message using these classes, it would be great if the message header properties in the message that was received were copied from the message that was sent. The sending application may well add information to the message context that will be required in the receiving application. When the sub messages are created in the splitter, the header properties in the first message could be set to the values in the original large message. The aggregator could then use the values from this first sub message to set the properties in the message header of the large message during the aggregation process.

    Using Asynchronous Methods

    The current implementation uses the synchronous send and receive methods of the QueueClient class. It would be much more performant to use the asynchronous methods; however, doing so may well affect the sequence in which the sub messages are enqueued, which would require the implementation of a resequencer in the aggregator to restore the correct message sequence.

    Handling Exceptions

    In order to keep the code readable, no exception handling was added to the implementations. In a real-world scenario exceptions should be handled accordingly.
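
    As a sketch of the first improvement (not part of the article's code; the streamOffest check mirrors the loop variable in the Send method above), the user-defined properties of the original message could be copied onto the first sub message in the splitter, and read back in the aggregator:

        // In LargeMessageSender.Send, inside the loop, after creating subMessage:
        if (streamOffest == 0)
        {
            // Copy the user properties of the original message to the first sub message.
            foreach (KeyValuePair<string, object> property in message.Properties)
            {
                subMessage.Properties[property.Key] = property.Value;
            }
        }

        // In LargeMessageReceiver.Receive, the Properties of the first sub message
        // received can be copied onto largeMessage the same way before returning it.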
