Search Results

Search found 5153 results on 207 pages for 'unique ptr'.

Page 162/207 | < Previous Page | 158 159 160 161 162 163 164 165 166 167 168 169  | Next Page >

  • XML: How to represent objects with multiple occurrences?

    - by savras
    Hi, I need to save objects that can occur multiple times. Each object is marked with a unique identifier. When it is serialized the first time, all its properties are written; after that, only references are used. <actionHistory> <add> <figure id="1" xsi:type="point"> <position x="1" y="2" /> </figure> </add> <change> <target ref="1" /> <property>x</property> <value>3</value> </change> </actionHistory> The 'target' element only references a point saved earlier, but it can contain the definition of a new figure as well. There is also a figure class hierarchy involved. Is there any way to express this using XML Schema? Any suggestions on how to improve the code above would also be appreciated.
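
    One way to express the define-once/reference-later pattern in XML Schema is xs:ID / xs:IDREF (or xs:key / xs:keyref if the ids must stay numeric, since xs:ID values have to be valid names such as "f1"). A minimal sketch, using the element names from the snippet above:

        <!-- Sketch only: a figure is declared once with an xs:ID and later referenced via xs:IDREF -->
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="figure">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="position" minOccurs="0">
                  <xs:complexType>
                    <xs:attribute name="x" type="xs:int"/>
                    <xs:attribute name="y" type="xs:int"/>
                  </xs:complexType>
                </xs:element>
              </xs:sequence>
              <xs:attribute name="id" type="xs:ID" use="required"/>
            </xs:complexType>
          </xs:element>
          <xs:element name="target">
            <xs:complexType>
              <xs:attribute name="ref" type="xs:IDREF" use="required"/>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    The xsi:type polymorphism would then come from declaring an abstract figure base complexType and deriving point and the other figure classes from it.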

    Read the article

  • Maintaining a Python web application: heavier vs lighter framework?

    - by Tiberiu Ana
    Five+ years from now, you are hired to support and extend a data-centric web application written in Python that hasn't been kept up to date. Would you rather prefer it was written in the current version of Django/Pylons at the time, using the available standard components, or kept minimal with something like CherryPy/web.py and a few library dependencies? Heavy framework Advantages: standard approach to application design and structure, as encouraged by framework; less application code to worry about. Disadvantages: requires learning the framework to understand how things work; broken things in old version of framework difficult to fix; upgrading to new version potentially difficult due to changing APIs; finding relevant documentation/help potentially difficult due to changing APIs. Light framework Advantages: most application code is directly "visible"; only needed features are implemented; architecture should be simpler to understand; less need to upgrade external dependencies; easier to upgrade external dependencies. Disadvantages: some reinventing the wheel; non-standard design and structure (with the associated unique issues and bugs). I will update the list with any helpful answers.

    Read the article

  • SQL 2005 indexed queries slower than unindexed queries

    - by uos??
    Adding a seemingly perfect index is having an unexpectedly adverse effect on query performance... -- [Data] has a predictable structure and a simple clustered index of the primary key: ALTER TABLE [dbo].[Data] ADD PRIMARY KEY CLUSTERED ( [ID] ) -- My query joins the table on itself looking for a certain kind of "overlapping" records SELECT DISTINCT [Data].ID AS [ID] FROM dbo.[Data] AS [Data] JOIN dbo.[Data] AS [Compared] ON [Data].[A] = [Compared].[A] AND [Data].[B] = [Compared].[B] AND [Data].[C] = [Compared].[C] AND ([Data].[D] = [Compared].[D] OR [Data].[E] = [Compared].[E]) AND [Data].[F] <> [Compared].[F] WHERE 1=1 AND [Data].[A] = @A AND @CS <= [Data].[C] AND [Data].[C] < @CE -- Between a range [Data] has about a quarter-million records so far; 10% to 50% of the data satisfies the where clause depending on @A, @CS, and @CE. As is, the query takes 1 second to return about 300 rows when querying 10%, and 30 seconds to return 3000 rows when querying 50% of the data. Curiously, the estimated/actual execution plan indicates two parallel Clustered Index Scans, but the clustered index is only on the ID, which isn't part of the conditions of the query, only the output. ?? If I add this hand-crafted [IDX_A_B_C_D_E_F] index, which I fully expected to improve performance, the query slows down by a factor of 8 (8 seconds for 10% & 4 minutes for 50%). The estimated/actual execution plans show an Index Seek, which seems like the right thing to be doing, but why so slow?? CREATE UNIQUE INDEX [IDX_A_B_C_D_E_F] ON [dbo].[Data] ([A], [B], [C], [D], [E], [F]) INCLUDE ([ID], [X], [Y], [Z]); The Database Engine Tuning Advisor suggests a similar index with no noticeable difference in performance from this one. Moving AND [Data].[F] <> [Compared].[F] from the join condition to the where clause makes no difference in performance. I need these and other indexes for other queries. I'm sure I could hint that the query should refer to the Clustered Index, since that's currently winning - but we all know it is not as optimized as it could be, and without a proper index, I can expect the performance will get much worse with additional data. What gives?
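
    One thing worth trying (a sketch only, not a guaranteed fix): reorder the index key so the equality predicate ([A]) and the range predicate ([C]) lead it, which lets the WHERE clause seek directly to the rows it needs; the remaining join columns and INCLUDEd columns keep it covering for both sides of the self-join.

        -- Sketch: key order driven by the WHERE clause (A equality first, then the C range).
        -- Column names follow the question; adjust the INCLUDE list to cover your query.
        CREATE INDEX [IDX_A_C_B_D_E_F] ON [dbo].[Data] ([A], [C], [B], [D], [E], [F])
            INCLUDE ([ID]);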

    Read the article

  • Using for or while loops

    - by Gary
    Every month, 4 or 5 text files are created. The data in the files is pulled into MS Access and used in a mailmerge. Each file contains a header. This is an example: HEADER|0000000130|0000527350|0000171250|0000058000|0000756600|0000814753|0000819455|100106 The 2nd field is the number of records contained in the file (excluding the header line). The last field is the date in the form yymmdd. Using gawk (for Windows), I've done ok with rearranging/modifying the data and writing it all out to a new file for importing into Access except for the following. I'm trying to create a unique ID number for each record. The ID number has the form 1mmddyyXXXX, where XXXX is a number, padded with leading zeros. Using the header above, the first record in the output file would get the ID number 10106100001 and the last record would get the ID 10106100130. I've tried putting the second field in the header into a variable, rearranging the last header field into the required date format and then looping with "for" statements to append the XXXX part of the ID and then outputting it all with printf but so far I've been complete rubbish at it. Thanks for your help! gary
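
    For reference, a minimal gawk sketch of one way to build those IDs (assuming the header is the first line and records follow it, as described above): pull the date out of the last header field, rearrange yymmdd into the 1mmddyy prefix, and let printf's %04d handle the zero padding.

        # Sketch only: prefix each record with a 1mmddyyXXXX ID built from the header.
        BEGIN { FS = "|" }
        NR == 1 {
            d = $NF                                                          # yymmdd from the last header field
            prefix = "1" substr(d, 3, 2) substr(d, 5, 2) substr(d, 1, 2)     # 1 + mm + dd + yy
            next
        }
        { printf "%s%04d|%s\n", prefix, ++n, $0 }                            # e.g. 10106100001|<original record>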

    Read the article

  • Lucene complex structure search

    - by archer
    Basically I have a pretty simple database that I'd like to index with Lucene. The domains are: // Person domain class Person { Set<Pair> keys; } // Pair domain class Pair { KeyItem keyItem; String value; } // KeyItem domain, name is unique field within the DB (!!) class KeyItem{ String name; } I have tens of millions of profiles and hundreds of millions of Pairs; however, since most of the KeyItem "name" fields are duplicates, there are only a few dozen KeyItem instances. I came up with that structure to save on KeyItem instances. Basically any Profile with any fields can be saved into that structure. Let's say we have a profile with properties - name: Andrew Morton - education: University of New South Wales, - country: Australia, - occupation: Linux programmer. To store it, we'll have a single Profile instance, 4 KeyItem instances: name, education, country and occupation, and 4 Pair instances with values: "Andrew Morton", "University of New South Wales", "Australia" and "Linux Programmer". All other profiles will reference (all or some of) the same KeyItem instances: name, education, country and occupation. My question is: how do I index all of that so I can search for Profiles by particular values of KeyItem::name and Pair::value? Ideally I'd like this kind of query to work: name:Andrew* AND occupation:Linux* Should I create a custom Indexer and Searcher? Or could I use the standard ones and just map KeyItem and Pair as Lucene components somehow?
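
    One common approach (a sketch only, assuming the stock Lucene Java API and hypothetical getters on the domain classes above) is to flatten each Person into a single Document, using each Pair's KeyItem name as the field name and the Pair value as the field value; queries like name:Andrew* AND occupation:Linux* then work with the standard QueryParser and IndexSearcher, with no custom indexer needed.

        // Sketch only: one Lucene Document per Person, one field per Pair.
        Document doc = new Document();
        for (Pair pair : person.getKeys()) {                 // getKeys() etc. are assumed accessors
            doc.add(new Field(pair.getKeyItem().getName(),   // e.g. "name", "occupation"
                              pair.getValue(),               // e.g. "Andrew Morton"
                              Field.Store.YES,
                              Field.Index.ANALYZED));
        }
        writer.addDocument(doc);                             // writer: an open IndexWriter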

    Read the article

  • How to enable OutputCache with an IHttpHandler

    - by Joseph Kingry
    I have an IHttpHandler that I would like to hook into the OutputCache support so I can offload cached data to the IIS kernel. I know MVC must do this somehow, I found this in OutputCacheAttribute: public override void OnResultExecuting(ResultExecutingContext filterContext) { if (filterContext == null) { throw new ArgumentNullException("filterContext"); } // we need to call ProcessRequest() since there's no other way to set the Page.Response intrinsic OutputCachedPage page = new OutputCachedPage(_cacheSettings); page.ProcessRequest(HttpContext.Current); } private sealed class OutputCachedPage : Page { private OutputCacheParameters _cacheSettings; public OutputCachedPage(OutputCacheParameters cacheSettings) { // Tracing requires Page IDs to be unique. ID = Guid.NewGuid().ToString(); _cacheSettings = cacheSettings; } protected override void FrameworkInitialize() { // when you put the <%@ OutputCache %> directive on a page, the generated code calls InitOutputCache() from here base.FrameworkInitialize(); InitOutputCache(_cacheSettings); } } But not sure how to apply this to an IHttpHandler. Tried something like this, but of course this doesn't work: public class CacheTest : IHttpHandler { public void ProcessRequest(HttpContext context) { OutputCacheParameters p = new OutputCacheParameters { Duration = 3600, Enabled = true, VaryByParam = "none", Location = OutputCacheLocation.Server }; OutputCachedPage page = new OutputCachedPage(p); page.ProcessRequest(context); context.Response.ContentType = "text/plain"; context.Response.Write(DateTime.Now.ToString()); context.Response.End(); } public bool IsReusable { get { return true; } } }
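
    If full kernel-mode caching turns out not to be essential, a simpler route (a sketch only; it uses the ASP.NET output cache via the response's HttpCachePolicy rather than the OutputCachedPage trick) is to set the cache policy directly inside the handler:

        // Sketch: programmatic server-side output caching from an IHttpHandler.
        public void ProcessRequest(HttpContext context)
        {
            context.Response.Cache.SetCacheability(HttpCacheability.Server);
            context.Response.Cache.SetExpires(DateTime.Now.AddSeconds(3600));
            context.Response.Cache.SetValidUntilExpires(true);
            context.Response.Cache.VaryByParams.IgnoreParams = true;   // equivalent of VaryByParam="none"

            context.Response.ContentType = "text/plain";
            context.Response.Write(DateTime.Now.ToString());
        }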

    Read the article

  • Data Transfer Objects VS Domain/ActiveRecord Entities in the View in RoR

    - by leypascua
    I'm coming from a .NET background, where it is common practice not to bind domain/entity models directly to the view in not-so-basic CRUD-ish applications where the view does not directly project entity fields as-is. I'm wondering what the practice is in RoR, where the default persistence mechanism is ActiveRecord. I would assert that presentation-related info should not be leaked into the entities, though I'm not sure if this is how real RoR heads would do it. If DTOs/model per view is the approach, how would you do it in Rails? Your thoughts? EDIT: Some examples: - A view shows a list of invoices, with the number of unique items in one column. - A list of credit card accounts, where possibly fraudulent transactions were executed. For that, the UI needs to show this row in red. For both scenarios, the lists don't show all of the fields of the entities, just a few to show in the list (like invoice #, transaction date, name of the account, the amount of the transaction). For the invoice example, the invoice entity doesn't have a field "No. of line items" mapped on it. The database has not been denormalized for perf reasons and it will be computed during query time using aggregate functions. For the credit card accounts example, surely the card transaction entity doesn't have a "Show-in-red" or "IsFraudulent" invariant. Yes it may be a business rule, but for this example, that is a presentation concern, so I would like to keep it out of my domain model.
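
    In Rails the usual lightweight answer (a sketch only; the class and attribute names below are hypothetical) is a plain-Ruby presenter / view model that wraps the ActiveRecord object and owns the presentation-only logic, so nothing leaks into the entity:

        # Sketch: a presenter wrapping an ActiveRecord model for one view.
        class TransactionPresenter
          def initialize(transaction)
            @transaction = transaction
          end

          # the presentation decision lives here, not on the entity
          def row_css_class
            suspicious? ? 'fraud-warning' : ''
          end

          private

          def suspicious?
            # hypothetical rule; the real flag would come from a domain/service check
            @transaction.amount > 10_000
          end
        end

    The view would then iterate over something like @transactions.map { |t| TransactionPresenter.new(t) } instead of the raw records.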

    Read the article

  • PHP Simple DOM Parser

    - by Junior Coder
    Hi guys, I'm using this wonderful class here to do a bit of embed code filtering: http://simplehtmldom.sourceforge.net/. It extends the PHP DOM document class. Pretty much what I am doing is parsing a string containing embed code through this class: I grab the unique bits of information (e.g. id, width, height), send them through a handler function which inserts the id, width, height etc. into my predefined "safe" template, and reinsert my safe template in place of the embed code the user has put in. It may seem a backward way of doing it but it's the way it has to be done :) All of that works fine. The problem is when there is more than just embed code contained in the string: since I can't replace just the embed code, I can only replace the entire string, which wipes out the rest of the tags in the string. For example, if there were a p tag it would be wiped. So my question is: using this class, how can I replace just that part of the string? I've spent the last few days trying to work this out and need some more input. It appears the class can do this, so I'm stumped. Here's a basic version of what I have so far :) // load the class $html = new simple_html_dom(); // load the entire string containing everything the user entered here $return = $html->load($string); // check for embed tags if($html->find('embed') == true) { foreach($html->find('embed') as $element) { // send it off to the function which returns a new safe embed code $element = create_new_embed($parameters); // this is where I somehow need to save the changes and send it back to $return } } Any input would be gratefully appreciated. If I haven't explained my problem well enough please let me know :)
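
    For what it's worth, simple_html_dom nodes expose an outertext property you can assign to, which replaces just that node and leaves the surrounding markup (p tags and all) intact; a sketch, assuming create_new_embed() returns the safe template as a string:

        // Sketch only: swap each <embed> in place, then serialise the whole document.
        $html = new simple_html_dom();
        $html->load($string);

        foreach ($html->find('embed') as $element) {
            // $parameters gathered from $element->id, $element->width, $element->height, ...
            $element->outertext = create_new_embed($parameters);
        }

        $return = $html->save();   // the rest of the user's markup is preserved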

    Read the article

  • form with multiple upload but allow no upload on edit problems

    - by minus4
    Hi, I have a section that takes in images when it is created; however, when you edit this item I don't want users to re-upload unchanged images just to change a description or name. I have created this to deal with uploading files: public void UploadFiles(string currentFileName, FormCollection form) { // loop through all files in form post foreach (string file in Request.Files) { HttpPostedFileBase hpf = Request.Files[file]; // if no file is uploaded, we could be editing so set to current value if (hpf.ContentLength == 0) { form[file] = currentFileName; } else { //rename the file so it is unique and we don't clash with names var filename = hpf.FileName.Replace(" ", "_").Replace(".", DateTime.Now.Date.Ticks + "."); UploadFileName = filename; hpf.SaveAs(Server.MapPath("~/Content/custom/" + filename)); // set the name of the file in our post to the new name form[file] = UploadFileName; } } // ensure value is still sent when no files are uploaded on edit if(Request.Files.Count <= 0) { UploadFileName = currentFileName; } } All works fine when only one image is required (currentFileName); however, there is now a new image available, taking it to a total of 2 images in the database, therefore currentFileName is obsolete. Has anyone tackled this, and how? I have hit a wall with this one. I thought of string[] currentFiles but can't see how to match this against string file in Request.Files. If it helps, I am also working with models for the form, so I could pass over the model, but I don't think you're able to do model.file without some kind of reflection. Help much appreciated. Thanks

    Read the article

  • .net MVC RenderPartial renders information that is not in the model

    - by Andreas
    Hi, I have a usercontrol that is rendering a list of items. Each row contains a unique id in a hidden field, some text and a delete button. When clicking on the delete button I use jQuery ajax to call the controller method DeleteCA (seen below). DeleteCA returns a new list of items that replaces the old list. [HttpPost] public PartialViewResult DeleteCA(CAsViewModel CAs, Guid CAIdToDelete) { int indexToRemove = CAs.CAList.IndexOf(CAs.CAList.Single(m => m.Id == CAIdToDelete)); CAs.CAList.RemoveAt(indexToRemove); return PartialView("EditorTemplates/CAs", CAs); } I have checked that DeleteCA is really removing the correct item. The modified list of CAs passed to PartialView no longer contains the deleted item. Something weird happens when the partial view is rendered. The number of items in the list is reduced, but it is always the last element that is removed from the list. The rendered items do not correspond to the items in the list/model sent to PartialView. In the usercontrol file (ascx) I'm using both Model.CAList and the lambda expression m => m.CAList. How is it possible for the usercontrol to render stuff that is not in the model sent to PartialView? Thanx Andreas
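
    A likely explanation (worth checking, not confirmed by the question): in a POST action, ASP.NET MVC's HTML helpers read values from ModelState rather than from the model passed to PartialView, so the originally posted values, including the hidden ids, win over the modified list. Clearing ModelState before returning the partial is the usual fix; a sketch:

        [HttpPost]
        public PartialViewResult DeleteCA(CAsViewModel CAs, Guid CAIdToDelete)
        {
            int indexToRemove = CAs.CAList.IndexOf(CAs.CAList.Single(m => m.Id == CAIdToDelete));
            CAs.CAList.RemoveAt(indexToRemove);

            ModelState.Clear();   // let the helpers re-read values from the updated model

            return PartialView("EditorTemplates/CAs", CAs);
        }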

    Read the article

  • Keys and terminology

    - by nabbed
    I have a predicament related to terminology. Our system processes events. Events are dispatched to a node based on the value of some field (or set of fields). We call this set of fields the key. We call the value of that set of fields the key value. What adds confusion is that each event is essentially a bag of key-value pairs (i.e., a hash map). So the word key is used for two different purposes: 1) to describe the set of fields on which the event is dispatched, and 2) as a field name. So if you had a collection of key-value pairs, and a set of those key-value pairs made up a database-style key, what terminology would you use to distinguish those two? (One further complication is that the key on which the event is dispatched is not always unique. For instance, if we dispatch on userid, and that user performs multiple actions, we will process multiple events with the same userid value. So maybe key is the wrong word to describe the set of fields on which we dispatch an event).

    Read the article

  • What is the Best way to databind an ASP.NET TreeView for table with many to many parent child relati

    - by Matt W
    I've got a table which has the usual ParentID, ChildID as its first two columns in a self-referencing tree data structure. My issue is that when I pull this out and use the following code: DataSet set = DA.GetNewCategories(); set.Relations.Add( new DataRelation("parentChildCategories", set.Tables[0].Columns["CategoryParentID"], set.Tables[0].Columns["CategoryID"]) ); StringBuilder buildXml = new StringBuilder(); StringWriter writer = new StringWriter(buildXml); set.WriteXml(writer); TreeView2.DataSource = new HierarchicalDataSet(set, "CategoryID", "CategoryParentID"); TreeView2.DataBind(); I get the error: "These columns don't currently have unique values". I believe this is because my data has children with multiple parent nodes. This is fine for my application - I don't mind if one row of data is rendered in multiple nodes of my TreeView. Could someone shed light on this please? It doesn't seem unreasonable to have a DataSet render XML which has nodes appearing in multiple places, but I can't figure out how to do it. Thanks, Matt.

    Read the article

  • Why does DbCommandBuilder (Oracle) produce a weird WHERE clause for UpdateCommand in C# / ADO.NET 2.0?

    - by matti
    I have a table HolidayHome in an Oracle DB which has a unique index on Id (I haven't specified this in the code in any way for the adapter/table/dataset; I don't know if I should/can). DbDataAdapter.SelectCommand is like this: SELECT Id, ExtId, Label, Location1, Location2, Location3, Location4, ClassId, X, Y, UseType FROM HolidayHome but the UpdateCommand generated by DbCommandBuilder has a very weird WHERE clause: UPDATE HOLIDAYHOME SET ID = :p1, EXTID = :p2, LABEL = :p3, LOCATION1 = :p4, LOCATION2 = :p5, LOCATION3 = :p6, LOCATION4 = :p7, CLASSID = :p8, X = :p9, Y = :p10, USETYPE = :p11 WHERE ((ID = :p12) AND ((:p13 = 1 AND EXTID IS NULL) OR (EXTID = :p14)) AND ((:p15 = 1 AND LABEL IS NULL) OR (LABEL = :p16)) AND ((:p17 = 1 AND LOCATION1 IS NULL) OR (LOCATION1 = :p18)) AND ((:p19 = 1 AND LOCATION2 IS NULL) OR (LOCATION2 = :p20)) AND ((:p21 = 1 AND LOCATION3 IS NULL) OR (LOCATION3 = :p22)) AND ((:p23 = 1 AND LOCATION4 IS NULL) OR (LOCATION4 = :p24)) AND (CLASSID = :p25) AND (X = :p26) AND (Y = :p27) AND (USETYPE = :p28)) The code is like this: static void CreateInsertUpdateDeleteCmds(DbDataAdapter dataAdapter) { DbCommandBuilder builder = _trgtProvFactory.CreateCommandBuilder(); builder.DataAdapter = dataAdapter; // Get the insert, update and delete commands. dataAdapter.InsertCommand = builder.GetInsertCommand(); dataAdapter.UpdateCommand = builder.GetUpdateCommand(); dataAdapter.DeleteCommand = builder.GetDeleteCommand(); } What should I do? The UpdateCommand is utter madness. Thanks & Best Regards: Matti
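
    A likely explanation (hedged, since it depends on the provider): that long WHERE clause is the builder's default optimistic-concurrency check, comparing every column to its original value, with the extra :pNN = 1 parameters handling NULLs. If last-writer-wins updates keyed only on the primary key are acceptable, setting the builder's ConflictOption should tame it; a sketch:

        // Sketch: key-only WHERE clauses instead of compare-all-values concurrency checks.
        DbCommandBuilder builder = _trgtProvFactory.CreateCommandBuilder();
        builder.DataAdapter = dataAdapter;
        builder.ConflictOption = ConflictOption.OverwriteChanges;   // WHERE uses just the primary key

        dataAdapter.UpdateCommand = builder.GetUpdateCommand();
        dataAdapter.DeleteCommand = builder.GetDeleteCommand();

    (This assumes the Oracle provider reports the primary key for that SELECT; if it doesn't, the key may need to be declared on the DataTable first.)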

    Read the article

  • Which CSS combining technique?

    - by DotnetShadow
    Hi there, Which of the following would you say is the best way to go when combining files for CSS: Say I have a master.css file that is used across all pages on my website (page1.aspx, page2.aspx) Page1.aspx - A specific page that has some unique css that is only ever used on that page, so I create a page1.css and it also uses another css grids.css Page2.aspx - Another specific page that is different from all other pages on the site and is different to page1.aspx, I'll name this page2.aspx and make a page2.css this doesn't use grids.css So would you combine the scripts as: Option1: Combine scripts csshandler.axd?d=master.css,page1.css,grids.css when visiting page1 Combine scripts csshandler.axd?d=master.css,page2.css when visiting page2 Benefits: Page specific, rendering quicker since only selectors for that page need to be matched up no unused selectors Drawback: Multiple combinations of master.css + page specific hence master.css has to be downloaded for each page Option2: Combine all scripts whether a page needs them or not csshandler.axd?d=master.css,page1.css,page2.css,grids.css (master, page1 and page2) that way it gets cached as one. The problem is that rendering maybe slower since it will have to try and match EVERY selector in the css with selectors on the page even the missing ones, so in the case of page2.aspx that doesn't use grids.css the selectors in grids.css will need to be parsed to see if they are in page2 which means rendering will be slow Benefits: One file will ever be downloaded and cached doesn't matter what page you visit Drawback: Unused selectors will need to be parsed by the browser slower rendering Option3: Leave the master file on it's own and only combine other scripts (the benefit of this is because master is used across all pages there is a chance that this is cached so doesn't need to keep on downloading csshandler.axd?d=Master.css csshandler.axd?d=page1.css,grids.css Benefits: master.css file can be cached doesn't matter what page you visit. Not many unused selectors as page spefic is applied Drawback: Initially minimum of 2 HTTP request will have to be made What do you guys think? Cheers DotnetShadow

    Read the article

  • Looking for a license key algorithm

    - by giulio
    There are a lot of questions relating to license keys asked on stackoverflow. But they don't answer this question. Can anyone provide a simple license key algorithm that is technology independent and doesn't require a diploma in mathematics to understand? The license key algorithm is similar to public key encryption. I just need something simple that can be implemented on any platform (.Net/Java) and uses simple data like characters. Preferably no byte translations required. So if a person presents a string, a complementary string can be generated that is the authorisation code. Below is a common scenario that it would be used for. Customer downloads s/w which generates a unique key upon initial startup/installation. S/w runs during trial period. At end of trial period an authorisation key is required. Customer goes to designated web-site, enters their code and gets an authorisation code to enable s/w, after paying :) Don't be afraid to describe your answer as though you're talking to a 5 yr old as I am not a mathematician. Just need a decent basic algorithm, we're not launching nukes... NB: Please no philosophy on encryption nor who is Diffie-Hellman. I just need a basic solution.
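
    A minimal sketch of one such scheme (an assumption, not the only way): keep a secret on the activation web site, and make the authorisation code a truncated HMAC of the trial key; the application ships with the same secret and repeats the computation to verify. This only deters casual key sharing, since the secret lives inside the binary, but it is simple and works identically in .NET and Java.

        // Sketch: authorisation code = first 16 hex chars of HMAC-SHA256(trialKey, secret).
        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class LicenseCodes
        {
            const string Secret = "replace-with-your-own-secret";   // hypothetical value

            public static string AuthCodeFor(string trialKey)
            {
                using (HMACSHA256 hmac = new HMACSHA256(Encoding.UTF8.GetBytes(Secret)))
                {
                    byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(trialKey));
                    return BitConverter.ToString(hash).Replace("-", "").Substring(0, 16);
                }
            }

            public static bool IsValid(string trialKey, string enteredCode)
            {
                return string.Equals(AuthCodeFor(trialKey), enteredCode,
                                     StringComparison.OrdinalIgnoreCase);
            }
        }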

    Read the article

  • RESTful API design question - how should one allow users to create new resource instances?

    - by Tamás
    I'm working in a research group where we intend to publish implementations of some of the algorithms we develop on the web via a RESTful API. Most of these algorithms work on small to medium size datasets, and in many cases, a user of our services might want to run multiple queries (with different parameters) on the same dataset, so for me it seems reasonable to allow users to upload their datasets in advance and refer to them in their queries later. In this sense, a dataset could be a resource in my API, and an algorithm could be another. My question is: how should I let the users upload their own datasets? I cannot simply let users upload their data to /dataset/dataset_id as letting the users invent their own dataset_ids might result in ID collision and users overwriting each other's datasets by accident. (I believe one of the most frequently used dataset ID would be test). I think an ideal way would be to have a dedicated URL (like /dataset/upload) where users can POST their datasets and the response would contain a unique ID under which the dataset was stored, but I'm not sure that it does not violate the basic principles of REST. What is the preferred way of dealing with such scenarios?
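
    For what it's worth, the common REST answer is essentially the second idea: POST to the collection resource and let the server mint the identifier, returning it in a Location header with a 201 status. POSTing to a collection to create a subordinate resource is generally considered RESTful, so a dedicated /dataset/upload endpoint isn't needed. A sketch of the exchange (paths and the ID are hypothetical):

        POST /datasets HTTP/1.1
        Content-Type: text/csv

        ...dataset contents...

        HTTP/1.1 201 Created
        Location: /datasets/7f3a2c91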

    Read the article

  • Sync Vs. Async Sockets Performance in .NET

    - by Michael Covelli
    Everything that I read about sockets in .NET says that the asynchronous pattern gives better performance (especially with the new SocketAsyncEventArgs which saves on the allocation). I think this makes sense if we're talking about a server with many client connections where it's not possible to allocate one thread per connection. Then I can see the advantage of using the ThreadPool threads and getting async callbacks on them. But in my app, I'm the client and I just need to listen to one server sending market tick data over one TCP connection. Right now, I create a single thread, set the priority to Highest, and call Socket.Receive() with it. My thread blocks on this call and wakes up once new data arrives. If I were to switch this to an async pattern so that I get a callback when there's new data, I see two issues. First, the threadpool threads will have default priority, so it seems they will be strictly worse than my own thread, which has Highest priority. Second, I'll still have to send everything through a single thread at some point. Say that I get N callbacks at almost the same time on N different threadpool threads notifying me that there's new data. The N byte arrays that they deliver can't be processed on the threadpool threads, because there's no guarantee that they represent N unique market data messages, because TCP is stream based. I'll have to lock and put the bytes into an array anyway and signal some other thread that can process what's in the array. So I'm not sure what having N threadpool threads is buying me. Am I thinking about this wrong? Is there a reason to use the async pattern in my specific case of one client connected to one server?

    Read the article

  • Using jQuery delay() with separate elements

    - by boomturn
    I want to fake a simple animation on page load by sequentially revealing several pngs, each appearing instantly but with unique, precisely timed delay before the next one. Using the latest version of jQuery (1.4.2 at writing), which provides a delay method. My first (braindead) impulse was: $('#image1').show('fast').delay(800); $('#image2').show('fast').delay(1200); $('#image3').show('fast'); Of course all come up simultaneously. jQuery doc examples refer to a single item being chained, not handling multiple items. OK... try again using nested callbacks from show(): $('#image1').show('fast', function() { $('#image2').show('fast', function() { $('#image3').show('fast', function() { }); }); }); Looks ugly (imagine doing 10, also wonder what processor hit would be) but hey, it works sequentially! Without delay though. So... chain delay() after show() using a callback function from it (not even sure if it supports one): $('#image1').show('fast').delay(800, function() { $('#image2').show('fast').delay(1200, function() { $('#image3').show('fast'); }); }); Hm. Opposite result of what I thought might happen: the callback works, but still no delay. So... should I be using some combination of queue and delay instead of nested callbacks? jQuery doc examples for queue again only use one element for example. Guessing this would involve using a custom queue, but am uncertain how to implement this. Or maybe I'm missing a simple way? (For a simple problem!)
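
    One more option (a sketch only): .delay() only postpones later entries in the fx queue of the element it is called on, which is why the snippets above fire everything at once. Driving the whole sequence from a single element's queue makes the delays take effect:

        // Sketch: all three reveals queued (with delays) on #image1's fx queue.
        $('#image1').show('fast')
            .delay(800)
            .queue(function (next) { $('#image2').show('fast'); next(); })
            .delay(1200)
            .queue(function (next) { $('#image3').show('fast'); next(); });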

    Read the article

  • Graph limitations - Should I use Decorator?

    - by Nick Wiggill
    I have a functional AdjacencyListGraph class that adheres to a defined interface GraphStructure. In order to layer limitations on this (eg. acyclic, non-null, unique vertex data etc.), I can see two possible routes, each making use of the GraphStructure interface: Create a single class ("ControlledGraph") that has a set of bitflags specifying various possible limitations. Handle all limitations in this class. Update the class if new limitation requirements become apparent. Use the decorator pattern (DI, essentially) to create a separate class implementation for each individual limitation that a client class may wish to use. The benefit here is that we are adhering to the Single Responsibility Principle. I would lean toward the latter, but by Jove!, I hate the decorator Pattern. It is the epitome of clutter, IMO. Truthfully it all depends on how many decorators might be applied in the worst case -- in mine so far, the count is seven (the number of discrete limitations I've recognised at this stage). The other problem with decorator is that I'm going to have to do interface method wrapping in every... single... decorator class. Bah. Which would you go for, if either? Or, if you can suggest some more elegant solution, that would be welcome. EDIT: It occurs to me that using the proposed ControlledGraph class with the strategy pattern may help here... some sort of template method / functors setup, with individual bits applying separate controls in the various graph-canonical interface methods. Or am I losing the plot?
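
    For scale, a single constraint written as a decorator might look like the sketch below (Java is assumed here, and so is the shape of the GraphStructure interface; the real one presumably has more methods, each of which would simply delegate to the wrapped instance). Whether seven of these beat one flag-driven ControlledGraph mostly comes down to how independent the constraints really are and how much method-wrapping boilerplate you can stomach.

        // Sketch: one limitation expressed as a decorator over a minimal, hypothetical interface.
        interface GraphStructure<V> {
            void addVertex(V v);
            void addEdge(V from, V to);
        }

        class AcyclicGraph<V> implements GraphStructure<V> {
            private final GraphStructure<V> inner;

            AcyclicGraph(GraphStructure<V> inner) { this.inner = inner; }

            public void addVertex(V v) { inner.addVertex(v); }     // pure delegation

            public void addEdge(V from, V to) {
                if (createsCycle(from, to)) {
                    throw new IllegalArgumentException("edge would introduce a cycle");
                }
                inner.addEdge(from, to);
            }

            private boolean createsCycle(V from, V to) {
                // a DFS from 'to' looking for 'from' would go here; omitted in this sketch
                return false;
            }
        }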

    Read the article

  • Basic site analytics doesn't tally with Google data

    - by Jenkz
    After being stumped by an earlier question: SO google-analytics-domain-data-without-filtering I've been experimenting with a very basic analytics system of my own. MySQL table: hit_id, subsite_id, timestamp, ip, url The subsite_id lets me drill down to a folder (as explained in the previous question). I can now get the following metrics: Page Views - Grouped by subsite_id and date Unique Page Views - Grouped by subsite_id, date, url, IP (not necessarily how Google does it!) The usual "most visited page", "likely time to visit" etc etc. I've now compared my data to that in Google Analytics and found that Google has lower values for each metric. I.e., my own setup is counting more hits than Google. So I've started discounting IPs from various web crawlers, Google, Yahoo & Dotbot so far. Short Questions: Is it worth me collating a list of all major crawlers to discount, and is any such list likely to change regularly? Are there any other obvious filters that Google will be applying to GA data? What other data would you collect that might be of use further down the line? What variables does Google use to work out entrance search keywords to a site? The data is only going to be used internally for our own "subsite ranking system", but I would like to show my users some basic data (page views, most popular pages etc) for their reference.
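
    As a side note, the unique-page-views figure described above can come straight out of the hit table with one grouped query (a sketch; the table name is an assumption, and it still counts whatever crawlers haven't been filtered out):

        -- Sketch: unique page views per subsite per day, counting each (url, ip) pair once.
        SELECT subsite_id,
               DATE(timestamp)          AS day,
               COUNT(DISTINCT url, ip)  AS unique_page_views
        FROM   hits
        GROUP  BY subsite_id, DATE(timestamp);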

    Read the article

  • How to configure for multiple gettext domains with babel, pylons, setuptools

    - by ICanHaveSpam
    While trying to internationalize my pylons web and mobile application, 'myapp', I'm finding that I would like to keep separate gettext pot files for separate domains. There will be common msgid values for both web and mobile users, and there will also be unique msgid values that are only translated for web or mobile users. I'm expecting the localized msgstrs for mobile users to be different (more terse) than the localized msgstrs for normal web users. The environment is like this: the same myapp/controllers will be used for both mobile and web requests. Mobile users will have their pages rendered from myapp/templates/mobile; normal web users will have their pages rendered from myapp/templates/web. What happens by default: I end up with myapp/i18n/myapp.pot and myapp/i18n/*/LC_MESSAGES/myapp.[pm]o files that contain msgid values from the controllers and both sets of templates. What I'm looking for: to set the gettext domain for the user's session when I decide which templates will render their responses. myapp's msgids from controllers and web templates extract into myapp/i18n/web.pot; myapp's msgids from controllers and mobile templates extract into myapp/i18n/mobile.pot. Babel's init_catalog, update_catalog, and compile_catalog runs deal with these separate domains and create separate po and mo localization files. Where I'm lost: configuring myapp's setup.cfg and setup.py to deal with separate gettext domains so that I can direct extracted msgid values into a particular pot file based on the path of the python and template files.

    Read the article

  • Lotus Notes - Export emails to plain text file

    - by mbeckish
    I am setting up a Lotus Notes account to accept emails from a client, and automatically save each email as a plain text file to be processed by another application. So, I'm trying to create my very first Agent in Lotus to automatically export the emails to text. Is there a standard, best practices way to do this? I've created a LotusScript Agent that pretty much works. However, there is a bug - once the Body of the memo exceeds 32K characters, it starts inserting extra CR/LF pairs. I am using Lotus Notes 7.0.3. Here is my script: Sub Initialize On Error Goto ErrorCleanup Dim session As New NotesSession Dim db As NotesDatabase Dim doc As NotesDocument Dim uniqueID As Variant Dim curView As NotesView Dim docCount As Integer Dim notesInputFolder As String Dim notesValidOutputFolder As String Dim notesErrorOutputFolder As String Dim outputFolder As String Dim fileNum As Integer Dim bodyRichText As NotesRichTextItem Dim bodyUnformattedText As String Dim subjectText As NotesItem ''''''''''''''''''''''''''''''''''''''''''''''''''''''' 'INPUT OUTPUT LOCATIONS outputFolder = "\\PASCRIA\CignaDFS\CUser1\Home\mikebec\MyDocuments\" notesInputFolder = "IBEmails" notesValidOutputFolder = "IBEmailsDone" notesErrorOutputFolder="IBEmailsError" ''''''''''''''''''''''''''''''''''''''''''''''''''''''' Set db = session.CurrentDatabase Set curview = db.GetView(notesInputFolder ) docCount = curview.EntryCount Print "NUMBER OF DOCS " & docCount fileNum = 1 While (docCount > 0) 'set current doc to Set doc = curview.GetNthDocument(docCount) Set bodyRichText = doc.GetFirstItem( "Body" ) bodyUnformattedText = bodyRichText.GetUnformattedText() Set subjectText = doc.GetFirstItem("Subject") If subjectText.Text = "LotusAgentTest" Then uniqueID = Evaluate("@Unique") Open "\\PASCRIA\CignaDFS\CUser1\Home\mikebec\MyDocuments\email_" & uniqueID(0) & ".txt" For Output As fileNum Print #fileNum, "Subject:" & subjectText.Text Print #fileNum, "Date:" & Now Print #fileNum, bodyUnformattedText Close fileNum fileNum = fileNum + 1 Call doc.PutInFolder(notesValidOutputFolder) Call doc.RemoveFromFolder(notesInputFolder) End If doccount = doccount-1 Wend Exit Sub ErrorCleanup: Call sendErrorEmail(db,doc.GetItemValue("From")(0)) Call doc.PutInFolder(notesErrorOutputFolder) Call doc.RemoveFromFolder(notesInputFolder) End Sub Update Apparently the 32KB issue isn't consistent - so far, it's just one document that starts getting extra carriage returns after 32K.

    Read the article

  • Silverlight: how to use a scroll viewer to wrap a list view without specifying height?

    - by John Nicholas
    I have a control that has a list that varies greatly in length. This control appears in various places, meaning that I cannot calculate its position and desired height easily. Moreover, all I want is for the scrollviewer to simply size itself according to its parent. Currently it insists on sizing itself according to the content. Currently, when I have a list that exceeds the height of the screen, the whole control extends off the bottom and the scrollviewer shows no bar (because it has stretched to the height of the contents and so thinks it is not required). I've not included code as the object graph is fairly deep. What I am looking for is a set of conditions that would cause the scrollviewer to resize itself according to its parent rather than its content. I have it working in a similar situation involving grids and datagrids; the unique part of this control is that there is a list containing controls. Any ideas? I would prefer solutions that don't require use of code behind - but I'm really not in a position to be choosy.

    Read the article

  • Create Keyword Object Perl Microsoft::AdCenter

    - by toobsco42
    So I looked at the perldoc for the Microsoft::AdCenter module and it shows this as an example of how to create a keyword object: ~$ perldoc Microsoft::AdCenter #Create a Keyword object my $keyword = Microsoft::AdCenter::V7::CampaignManagementService::Keyword->new ->Text("some text") ->BroadMatchBid(Microsoft::AdCenter::V7::CampaignManagementService::Bid->new->Amount(0.1)) ->ExactMatchBid(Microsoft::AdCenter::V7::CampaignManagementService::Bid->new->Amount(0.1)); However, doesn't this violate the new policy of using only one match type per keyword? Campaign Management changes: "Previously, you would create a single Keyword object and specify a bid value for each match that you wanted to bid on (for example, exact match or phrase match). If you did not specify a bid value at the keyword-level, adCenter used the default bid value specified at the ad group level. Now, you must create a Keyword object for each match type that you want to bid on. For example, to bid on the keyword car by using exact match and phrase match, create a Keyword object and set the Text element to car and the ExactMatchBid element to a bid amount. Then, create a second Keyword object and set the Text element to car and PhraseMatchBid to a bid amount. When you add the keywords, you’ll get a unique keyword ID for each keyword and match-type combination."
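
    Based on the policy text quoted above, it would presumably now take one Keyword object per match type; a sketch (PhraseMatchBid is assumed to follow the same builder pattern as ExactMatchBid in this module):

        # Sketch: bid on "car" with exact match and phrase match as two separate Keyword objects.
        my $exact = Microsoft::AdCenter::V7::CampaignManagementService::Keyword->new
            ->Text("car")
            ->ExactMatchBid(Microsoft::AdCenter::V7::CampaignManagementService::Bid->new->Amount(0.10));

        my $phrase = Microsoft::AdCenter::V7::CampaignManagementService::Keyword->new
            ->Text("car")
            ->PhraseMatchBid(Microsoft::AdCenter::V7::CampaignManagementService::Bid->new->Amount(0.10));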

    Read the article

  • SmtpClient.SendAsync - How to ensure my application doesn't finish before callback?

    - by James
    Hi, I need to send emails asychronously through a console application. I need to do some DB updates on the callback but my application is exiting before the callback code gets run! How can I stop this from happening in a nice manner rather than simply guessing how long to wait before exiting. I would imagine the Async calls get placed in some form of thread? Is it possible to check if any are waiting to be called? Sample Code private static void SendCompletedCallback(object sender, AsyncCompletedEventArgs e) { // Get the unique identifier for this asynchronous operation. String token = (string) e.UserState; if (e.Cancelled) { Console.WriteLine("[{0}] Send canceled.", token); } if (e.Error != null) { Console.WriteLine("[{0}] {1}", token, e.Error.ToString()); } else { // update DB Console.WriteLine("Message sent."); } } public static void Main(string[] args) { var users = Repository.GetUsers(); SmtpClient client = new SmtpClient("Host"); client.SendCompleted += new SendCompletedEventHandler(SendCompletedCallback); MailAddress from = new MailAddress("[email protected]", "System", Encoding.UTF8); foreach (var user in users) { MailAddress to = new MailAddress(user.Email); MailMessage message = new MailMessage(from, to); message.Body = "This is a test"; message.BodyEncoding = System.Text.Encoding.UTF8; message.Subject = "test message 1" + someArrows; message.SubjectEncoding = System.Text.Encoding.UTF8; string userState = String.Format("Message for user id {0}", user.ID); client.SendAsync(message, userState); message.Dispose(); } // need to wait here until I have received a callback for each message // otherwise the application will exit }

    Read the article

< Previous Page | 158 159 160 161 162 163 164 165 166 167 168 169  | Next Page >