Search Results

Search found 1934 results on 78 pages for 'areas'.


  • Can't cast TreeViewItem as TreeViewItem in WPF

    - by phenevo
    Hi, I've got an ASMX web service with these classes:

        Country:    public string Name {get;set;}  public string Code {get;set;}  public List<Area> Areas {get;set;}
        Area:       public string Name {get;set;}  public string Code {get;set;}  public List<Regions> Provinces {get;set;}
        Provinces:  public string Name {get;set;}  public string Code {get;set;}

    I bind it to my WPF TreeView:

        Country[] items = new MyService().GetListOfCountries();
        structureTree.ItemsSource = items;

    Code of my tree:

        <UserControl x:Class="ObjectsAndZonesSimpleTree"
                     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation">
          <Grid>
            <StackPanel Name="stackPanel1">
              <GroupBox Header="Choose" Height="354" Name="groupBox1" Width="Auto">
                <TreeView Name="structureTree" SelectedItemChanged="structureTree_SelectedItemChanged"
                          Grid.Row="0" Grid.Column="0" ItemsSource="{Binding}" Height="334"
                          ScrollViewer.VerticalScrollBarVisibility="Visible"
                          ScrollViewer.HorizontalScrollBarVisibility="Visible" Width="Auto"
                          PreviewMouseRightButtonUp="structureTree_PreviewMouseRightButtonUp"
                          FontFamily="Verdana" FontSize="12" BorderThickness="1" MinHeight="0"
                          Padding="1" Cursor="Hand" Margin="-1">
                  <TreeView.Resources>
                    <HierarchicalDataTemplate DataType="{x:Type MyService:Country}" ItemsSource="{Binding Path=ListOfRegions}">
                      <StackPanel Orientation="Horizontal">
                        <TextBlock TextAlignment="Justify" VerticalAlignment="Center" Text="{Binding Path=Name}"/>
                      </StackPanel>
                    </HierarchicalDataTemplate>
                    <HierarchicalDataTemplate DataType="{x:Type MyService:Region}" ItemsSource="{Binding Path=Provinces}">
                      <StackPanel Orientation="Horizontal">
                        <TextBlock TextAlignment="Justify" VerticalAlignment="Center" Text="{Binding Path=Name}"/>
                      </StackPanel>
                    </HierarchicalDataTemplate>
                    <DataTemplate DataType="{x:Type MyService:Province}" ItemsSource="{Binding Path=ListOfCities}">
                      <StackPanel Orientation="Horizontal">
                        <TextBlock TextAlignment="Justify" VerticalAlignment="Center" Text="{Binding Path=Name}"/>
                      </StackPanel>
                    </DataTemplate>
                  </TreeView.Resources>
                </TreeView>
              </GroupBox>
            </StackPanel>
          </Grid>
        </UserControl>

    This gives me null:

        private void structureTree_SelectedItemChanged(object sender, RoutedPropertyChangedEventArgs<object> e)
        {
            TreeViewItem treeViewItem = structureTree.SelectedItem as TreeViewItem;
        }
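
    In WPF, TreeView.SelectedItem returns the bound data object (here a Country, Region or Province proxy instance), not the generated TreeViewItem container, which is why the cast above yields null. A minimal sketch of the difference, assuming the proxy types from the post:

        private void structureTree_SelectedItemChanged(object sender, RoutedPropertyChangedEventArgs<object> e)
        {
            // SelectedItem is the data item the template was bound to.
            var country = structureTree.SelectedItem as Country; // non-null when a Country node is selected

            // The container can still be looked up if it is really needed; note that
            // ContainerFromItem on the TreeView itself only resolves top-level items directly.
            var container = structureTree.ItemContainerGenerator
                .ContainerFromItem(structureTree.SelectedItem) as TreeViewItem;
        }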

    Read the article

  • Problem: adding feature in MOSS 2007

    - by Anoop
    Hi all, I have added a menu item to the 'Actions' menu of a document library as follows (using features, as explained in the MSDN article "How to: Add Actions to the User Interface": http://msdn.microsoft.com/en-us/library/ms473643.aspx).

    feature.xml is as follows:

        <?xml version="1.0" encoding="utf-8" ?>
        <Feature Id="51DEF381-F853-4991-B8DC-EBFB485BACDA"
                 Title="Import From REP"
                 Description="This example shows how you can customize various areas inside Windows SharePoint Services."
                 Version="1.0.0.0"
                 Scope="Site"
                 xmlns="http://schemas.microsoft.com/sharepoint/">
          <ElementManifests>
            <ElementManifest Location="ImportFromREP.xml" />
          </ElementManifests>
        </Feature>

    ImportFromREP.xml is as follows:

        <?xml version="1.0" encoding="utf-8"?>
        <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
          <!-- Document Library Toolbar Actions Menu Dropdown -->
          <CustomAction Id="ImportFromREP"
                        RegistrationType="List"
                        RegistrationId="101"
                        GroupId="ActionsMenu"
                        Rights="AddListItems"
                        Location="Microsoft.SharePoint.StandardMenu"
                        Sequence="1000"
                        Title="Import From REP">
            <UrlAction Url="/_layouts/ImportFromREP.aspx?ActionsMenu"/>
          </CustomAction>
        </Elements>

    I have successfully installed and activated the feature.

    Normal case: if I log in with a user having at least 'Contribute' permission on the document library, I am able to see the menu item 'Import From REP' (in the 'Actions' menu) in the root folder as well as in all subfolders.

    Problem case: the user has view rights on the document library but add/delete rights on a particular subfolder (say 'subfolder') inside the document library. The menu item 'Import From REP' is not visible at the root folder level, and the 'Upload' menu is not visible there either, which is OK because the user does not have the AddListItems right on the root folder. But 'Import From REP' is also not visible in 'subfolder', while the 'Upload' menu is visible there because the user has add/delete rights on 'subfolder'.

    Did I specify a wrong value for the Rights attribute (Rights="AddListItems") in the XML file? If so, what should the correct value be? What should I do to simulate the behaviour of the 'Upload' menu (i.e. when the 'Upload' menu is visible, the 'Import From REP' menu is visible, and when the 'Upload' menu is not visible, the 'Import From REP' menu is also not visible)?

    Thanks in advance
    Anoop

    Read the article

  • DynamicMethod for ConstructorInfo.Invoke, what do I need to consider?

    - by Lasse V. Karlsen
    My question is this: if I'm going to build a DynamicMethod object, corresponding to a ConstructorInfo.Invoke call, what types of IL do I need to implement in order to cope with all (or most) types of arguments, when I can guarantee that the right type and number of arguments is going to be passed in before I make the call?

    Background: I am on my 3rd iteration of my IoC container, and currently doing some profiling to figure out if there are any areas where I can easily shave off large amounts of time being used. One thing I noticed is that when resolving to a concrete type, ultimately I end up with a constructor being called, using ConstructorInfo.Invoke, passing in an array of arguments that I've worked out. What I noticed is that the invoke method has quite a bit of overhead, and I'm wondering if most of this is just different implementations of the same checks I do. For instance, due to the constructor matching code I have, to find a matching constructor for the predefined parameter names, types, and values that I have passed in, there's no way this particular invoke call will not end up with something it should be able to cope with, like the correct number of arguments, in the right order, of the right type, and with appropriate values.

    When doing a profiling session containing a million calls to my resolve method, and then replacing it with a DynamicMethod implementation that mimics the Invoke call, the profiling timings were like this:

        ConstructorInfo.Invoke: 1973ms
        DynamicMethod:            93ms

    This accounts for around 20% of the total runtime of this profiling application. In other words, by replacing the ConstructorInfo.Invoke call with a DynamicMethod that does the same, I am able to shave off 20% runtime when dealing with basic factory-scoped services (ie. all resolution calls end up with a constructor call). I think this is fairly substantial, and warrants a closer look at how much work it would be to build a stable DynamicMethod generator for constructors in this context.

    So, the dynamic method would take in an object array, and return the constructed object, and I already know the ConstructorInfo object in question. Therefore, it looks like the dynamic method would be made up of the following IL:

        l001: ldarg.0      ; the object array containing the arguments
        l002: ldc.i4.0     ; the index of the first argument
        l003: ldelem.ref   ; get the value of the first argument
        l004: castclass T  ; cast to the right type of argument (only if not "Object")
              (repeat l001-l004 for all parameters, l004 only for non-Object types,
               varying the l002 constant from 0 and up for each index)
        l005: newobj ci    ; call the constructor
        l006: ret

    Is there anything else I need to consider? Note that I'm aware that creating dynamic methods will probably not be available when running the application in "reduced access mode" (sometimes the brain just won't give up those terms), but in that case I can easily detect that and just call the original constructor as before, with the overhead and all.
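
    For reference, a minimal sketch of the kind of generator described above, written against System.Reflection.Emit; the Func<object[], object> delegate shape is an assumption for illustration, and it assumes ci constructs a reference type. It also unboxes value-type parameters, which is one case the IL listing above does not cover:

        using System;
        using System.Reflection;
        using System.Reflection.Emit;

        static class CtorActivator
        {
            public static Func<object[], object> BuildActivator(ConstructorInfo ci)
            {
                var dm = new DynamicMethod("ctor_" + ci.DeclaringType.Name, typeof(object),
                                           new[] { typeof(object[]) }, ci.DeclaringType.Module, true);
                ILGenerator il = dm.GetILGenerator();
                ParameterInfo[] pars = ci.GetParameters();
                for (int i = 0; i < pars.Length; i++)
                {
                    il.Emit(OpCodes.Ldarg_0);            // the object[] of arguments
                    il.Emit(OpCodes.Ldc_I4, i);          // index of this parameter
                    il.Emit(OpCodes.Ldelem_Ref);         // load the argument value
                    Type pt = pars[i].ParameterType;
                    if (pt.IsValueType)
                        il.Emit(OpCodes.Unbox_Any, pt);  // value types arrive boxed in the array
                    else if (pt != typeof(object))
                        il.Emit(OpCodes.Castclass, pt);  // cast only when not already Object
                }
                il.Emit(OpCodes.Newobj, ci);             // call the constructor
                il.Emit(OpCodes.Ret);
                return (Func<object[], object>)dm.CreateDelegate(typeof(Func<object[], object>));
            }
        }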

    Read the article

  • Featureful commercial text editors?

    - by wrp
    I'm willing to buy tools if they add genuine value over a FOSS equivalent. One thing I wouldn't mind having is an editor with the power of Emacs, but made more user-friendly. There seem to be several commercial editors out there, but I can't find much discussion of them online. Maybe it's because the kind of people who use commercial software don't have time to do much blogging. ;-) If you have used any, what was your evaluation? I'd especially like to hear how you would compare them to Emacs. I'm thinking of editors like VEDIT, Boxer, Crisp, UltraEdit, SlickEdit, etc. To get things started, I tried EditPad Pro because I needed something on a Win98SE box. I was attracted by its powerful support for regexps, but I didn't use it for long. One annoyance was that find-in-files was only available in a separate product you had to buy. The main problem, though, was stability. It sometimes hung and I lost a few files because it corrupted them while editing. After a couple weeks, I found that I was avoiding using it, so I just uninstalled. Edit: Ah...I need to remove some ambiguity. With reference to Emacs, "power" often means its potential for customization. This malleability comes from having an architecture in which most of the functionality is written in a scripting language that runs on a compiled core. Emacs (with elisp) is by far the most widely known such system among home users, but there have been other heavily used editors such as Freemacs (MINT), JED (S-Lang), XEDIT (Rexx), ADAM (TPU), and SlickEdit (Slick-C). In this case, by "power" I'm not referring to extensibility but to realized features. There are three main areas which I think a commercial text editor might be an improvement over Emacs: Stability The only apps I regularly use on Linux that give me flaky behavior are Emacs, Gedit, and Geany. On Windows, I like the look and features of Notepad++, but I find it extremely unstable, especially if I try to use the plugins. Whatever I happen to be doing, I'm using some text editor practically all day long. If I could switch to an editor that never gave me problems, it would definitely lower my stress level. Tools When I started using Emacs, I searched the manual cover to cover to gleam ideas for clever, useful things I could do with it. I'd like to see lots of useful features for editing code, based on detailed knowledge of what the system can do and the accumulated feedback of users. Polish The rule of threes goes that if you develop something for yourself, it's three times harder to make it usable in-house, and three times harder again to make it a viable product for sale. It's understandable, but free software development doesn't seem to benefit from much usability testing. BTW, texteditors.org is a fantastic resource for researching text editors.

    Read the article

  • How to best design a date/geographic proximity query on GAE?

    - by Dane
    Hi all, I'm building a directory for finding athletic tournaments on GAE with web2py and a Flex front end. The user selects a location, a radius, and a maximum date from a set of choices. I have a basic version of this query implemented, but it's inefficient and slow. One way I know I can improve it is by condensing the many individual queries I'm using to assemble the objects into bulk queries; I just learned that was possible. But I'm also thinking about a more extensive redesign that utilizes memcache.

    The main problem is that I can't query the datastore by location because GAE won't allow multiple numerical comparison statements (<, <=, >=, >) in one query. I'm already using one for date, and I'd need TWO to check both latitude and longitude, so it's a no go. Currently, my algorithm looks like this:

    1.) Query by date and select
    2.) Use the destination function from geopy's distance module to find the max and min latitudes and longitudes for the supplied distance
    3.) Loop through results and remove all with lat/lng outside max/min
    4.) Loop through again and use the distance function to check the exact distance, because step 2 will include some areas outside the radius. Remove results outside the supplied distance (is this 2/3/4 combination inefficient?)
    5.) Assemble many-to-many lists and attach to objects (this is where I need to switch to bulk operations)
    6.) Return to client

    Here's my plan for using memcache... let me know if I'm way out in left field on this, as I have no prior experience with memcache or server caching in general.

    - Keep a list in the cache filled with "geo objects" that represent all my data. These have five properties: latitude, longitude, event_id, event_type (in anticipation of expanding beyond tournaments), and start_date. This list will be sorted by date.
    - Also keep a dict of pointers in the cache which represent the start and end indices in the cache for all the date ranges my app uses (next week, 2 weeks, month, 3 months, 6 months, year, 2 years).
    - Have a scheduled task that updates the pointers daily at 12am.
    - Add new inserts to the cache as well as the datastore; update pointers.

    Using this design, the algorithm would now look like:

    1.) Use pointers to slice off the appropriate chunk of the list based on the supplied date.
    2-4.) Same as the above algorithm, except with geo objects
    5.) Use a bulk operation to select full tournaments using the remaining geo objects' event_ids
    6.) Assemble many-to-manys
    7.) Return to client

    Thoughts on this approach? Many thanks for reading and any advice you can give. -Dane
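
    The bounding-box pre-filter plus exact-distance check in steps 2-4 is sketched below in C# purely for illustration (the post's stack is Python/web2py; GeoObject is a hypothetical stand-in for the cached "geo objects"):

        static IEnumerable<GeoObject> WithinRadius(IEnumerable<GeoObject> candidates,
            double lat, double lng, double radiusKm,
            double minLat, double maxLat, double minLng, double maxLng)
        {
            foreach (var g in candidates)
            {
                // step 3: cheap rectangle test first
                if (g.Latitude < minLat || g.Latitude > maxLat ||
                    g.Longitude < minLng || g.Longitude > maxLng) continue;

                // step 4: exact great-circle distance, since the box over-includes the corners
                if (HaversineKm(lat, lng, g.Latitude, g.Longitude) <= radiusKm)
                    yield return g;
            }
        }

        static double HaversineKm(double lat1, double lon1, double lat2, double lon2)
        {
            double Rad(double d) => d * Math.PI / 180.0;
            double dLat = Rad(lat2 - lat1), dLon = Rad(lon2 - lon1);
            double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                       Math.Cos(Rad(lat1)) * Math.Cos(Rad(lat2)) *
                       Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
            return 6371.0 * 2 * Math.Asin(Math.Sqrt(a));
        }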

    Read the article

  • Gateway Page Between ASP and an ASP.NET Page

    - by ajdams
    I'll admit, I am pretty new to ASP.NET programming, and I have been asked to take all our gateway pages (written in classic ASP) and make one universal gateway page to the few C# .NET applications we have (that I wrote). I tried searching here and the web and couldn't find much of anything describing a great way to do this, and figured I was either not searching properly or was not properly naming what I am trying to do. I decided to take one of the main gateway pages we had in classic ASP and use that as a base for my new gateway. Without boring you with a bunch of code, I will summarize my gateway in steps and then can take advice/critique from there.

    EDIT: Basically what I am trying to do is go from a classic ASP page to an ASP.NET page and then back again.

    EDIT2: If my question is still unclear, I am asking whether what I have is the right start and whether anyone has suggestions as to how this could be better. It can be as generic as need-be; I'm not looking for a specific off-the-shelf code answer.

    My gateway page: in the first part of the page I grab session variables and determine if the user is leaving or returning through the gateway. Code (in VB):

        uid = Request.QueryString("GUID")
        If uid = "" Then
            direction = "Leaving"
        End If

        ' Gather available user information.
        userid = Session("lnglrnregid")
        bankid = Session("strBankid")

        ' Return location.
        floor = Request.QueryString("Floor")

        ' The option chosen will determine if the user continues as SSL or not.
        ' If they are currently SSL, they should remain if chosen.
        option1 = Application(bankid & "Option1")
        If MID(option1, 6, 1) = "1" Then
            sslHttps = "s"
        End If

    Next I enter the uid into a database table (SQL Server 2005) as a uniqueidentifier field called GUID. I omitted the stored procedure call. Lastly, I use the direction variable to determine if the user is leaving or returning and do redirects from there to the different areas of the site. Code (in VB again):

        If direction = "Leaving" Then
            Select Case floor
                Case "sscat", "ssassign"   ' A SkillSoft course
                    Response.Redirect("Some site here")
                Case "lrcat", "lrassign"   ' A LawRoom course
                    Response.Redirect("Some site here")
                Case Else                  ' Some other SCORM course like MindLeaders or a custom upload.
                    Response.Redirect("Some site here")
            End Select
            Session.Abandon
        Else
            ' The only other direction is "Returning"
            .....

    That's about it so far - so like I said, not an expert, so any suggestions would be greatly appreciated!
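
    On the ASP.NET side of the hop, the receiving page would presumably just pull the GUID back off the query string and rehydrate its own session from the gateway table; a rough C# sketch of that half (GatewayRecord and LoadGatewayRecord are hypothetical stand-ins for the stored-procedure lookup):

        protected void Page_Load(object sender, EventArgs e)
        {
            string uid = Request.QueryString["GUID"];
            if (string.IsNullOrEmpty(uid))
            {
                // No GUID means the user is heading back out; send them through the classic ASP gateway.
                Response.Redirect("gateway.asp?Floor=" + Server.UrlEncode("sscat"));
                return;
            }

            GatewayRecord record = LoadGatewayRecord(new Guid(uid)); // hypothetical lookup by the uniqueidentifier column
            Session["lnglrnregid"] = record.UserId;
            Session["strBankid"] = record.BankId;
        }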

    Read the article

  • Using commit monitors as a form of code review

    - by Jeff Dege
    I'm working in a small company - four developers, working on a variety of projects. We've been looking at what we can do as cost-effective methods of process improvement, and an idea came up. Given what we do, we often have single developers working on parts of a system, independently of the other developers. This can have a number of negative effects:

    - A developer might not be fully aware of the context in which a change is being implemented, and make the change in a way that will meet the current customer's needs, but will break functionality that other customers depend on.
    - A developer might make a change that breaks the current architectural design, introducing a dependency that will cause problems in future development.
    - Other developers might not be aware of how the system has changed, in areas that they have not worked on.

    We've talked about doing code reviews as a way of dealing with these issues, but we've not had much success when we tried. It takes a lot of time to prepare a change for a code review, and it takes everybody out of production while the review is being performed. And the benefit of any review we've tried has been minimal.

    We're using Subversion (with TortoiseSVN) as our VCS. I've been looking at the Subversion CommitMonitor tool, and wondering whether it might work as a sort of poor man's code review. It lists every commit made on the repository, allowing someone to see the changes that have been made, the log messages made for that change, the files that were included in the change, and the specific lines in each file that were changed.

    Rather than scheduling a meeting and trying to get everybody together to review every change, we could just have every developer review every other developer's commits, at whatever time was convenient. This would keep every developer abreast of what changes were being made elsewhere in the system, and would have every change reviewed for customer conflicts and design consistency, at a fairly low cost. If someone saw a problem with the code that was being checked in, he could discuss it with the developer who did the commit, or more likely, schedule a meeting to discuss how the new feature could be implemented in a way that would not impact other users or screw up the architecture.

    Anyone else doing anything like this, using commit monitors for such a purpose?

    Read the article

  • Mixing application modules between Silverlight and ASP.NET

    - by jkohlhepp
    Background: I work in a suite of ASP.NET applications that have several different "modules". The applications all share a main menu, so they all link to one-another. The modules are the high-level areas of the application. So, for example, it might be Payments, Orders, Customers, Products, etc. And Payments and Orders are in one app and Products and Customers are in another. Some of these menu links are "deep links", for example it might be a link to a particular page within the Customers module, such as Create New Customer. The issue: We are about to start a project that will add several more modules to this suite, probably as a new .NET application. I'm thinking about doing these new modules in Silverlight (for various reasons that are not material to the question). If I were to do that, I need to make the menu look the same as the menu in ASP.NET, as the users still need to feel like they are inside one "application". My questions: How should I organize the Silverlight project(s) so that I can "deep link" from ASP.NET pages into particular modules in the Silverlight app? What is even the best idea for creating these different Silverlight "modules"? If I had something that would've been a page in ASP.NET (for example - Create Customer), should each one of those be a separate Silverlight app? Or should it be a separate User Control? Or something else? Should I reuse our shared ASP.NET menu, and deep link to different Silverlight "modules" even within the new application? Or should I reimplement the menu in Silverlight for navigation within the app? Are there menu controls for Silverlight that look similar to ASP.NET menus (with flyout submenus in this case)? Could I maybe even share a SiteMap XML file between them? Edit: After looking around a bit more, it seems like PRISM might be the answer for some of my issues. It would allow me to modularize the different chunks of Silverlight that I have. And it would allow me to define a "master page" in Silverlight where I could host the menu. Do I have this right?

    Read the article

  • SQL Server to PostgreSQL - Migration and design concerns

    - by youwhut
    Currently migrating from SQL Server to PostgreSQL and attempting to improve a couple of key areas on the way. I have an Articles table:

        CREATE TABLE [dbo].[Articles](
            [server_ref]    [int] NOT NULL,
            [article_ref]   [int] NOT NULL,
            [article_title] [varchar](400) NOT NULL,
            [category_ref]  [int] NOT NULL,
            [size]          [bigint] NOT NULL
        )

    Data (comma-delimited text files) is dumped on the import server by ~500 (out of ~1000) servers on a daily basis. Importing:

    - Indexes are disabled on the Articles table.
    - For each dumped text file, the data is BULK copied to a temporary table.
    - The temporary table is updated.
    - Old data for the server is dropped from the Articles table.
    - The temporary table data is copied to the Articles table.
    - The temporary table is dropped.

    Once this process is complete for all servers, the indexes are built and the new database is copied to a web server. I am reasonably happy with this process, but there is always room for improvement as I strive for a real-time (haha!) system. Is what I am doing correct?

    The Articles table contains ~500 million records and is expected to grow. Searching across this table is okay but could be better, i.e.

        SELECT * FROM Articles WHERE server_ref=33 AND article_title LIKE '%criteria%'

    has been satisfactory, but I want to improve the speed of searching. Obviously the "LIKE" is my problem here. Suggestions?

        SELECT * FROM Articles WHERE article_title LIKE '%criteria%'

    is horrendous. Partitioning is a feature of SQL Server Enterprise, but $$$, which is why it is one of the many exciting prospects of PostgreSQL. What performance hit will be incurred for the import process (drop data, insert data) and building indexes? Will the database grow by a huge amount?

    The database currently stands at 200 GB and will grow. Copying this across the network is not ideal, but it works. I am putting thought into changing the hardware structure of the system. The thought process behind having an import server and a web server is that the import server can do the dirty work (WITHOUT indexes) while the web server (WITH indexes) can present reports. Maybe reducing the system down to one server would work to skip the copying-across-the-network stage. This one server would have two versions of the database: one with the indexes for delivering reports and the other without, for importing new data. The databases would swap daily. Thoughts?

    This is a fantastic system, and believe it or not there is some method to my madness in giving it a big shake-up. UPDATE: I am not looking for help with relational databases, but hoping to bounce ideas around with data warehouse experts.
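
    On the LIKE '%criteria%' point: a leading wildcard defeats a normal b-tree index, and one commonly used PostgreSQL alternative (an assumption here, not something from the post) is full-text search on the title, with the caveat that it matches whole words/lexemes rather than arbitrary substrings (the pg_trgm extension is the closer substring equivalent). A rough sketch using the Npgsql driver, assuming an expression index such as CREATE INDEX ... USING gin (to_tsvector('english', article_title)):

        using (var conn = new NpgsqlConnection(connectionString))
        using (var cmd = new NpgsqlCommand(
            "SELECT * FROM articles " +
            "WHERE server_ref = @server " +
            "AND to_tsvector('english', article_title) @@ plainto_tsquery('english', @criteria)", conn))
        {
            cmd.Parameters.AddWithValue("server", 33);
            cmd.Parameters.AddWithValue("criteria", "criteria");
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // process the matching article rows
                }
            }
        }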

    Read the article

  • What am I missing about WCF?

    - by Bigtoe
    I've been developing in MS technologies for longer than I care to remember at this stage. When .NET arrived on the scene I thought they hit the nail on the head, and with each iteration and version I thought their technologies were getting stronger and stronger, and I looked forward to each release. However, having had to work with WCF for the last year, I must say I found the technology very difficult to work with and understand. Initially it's quite appealing, but when you start getting into the guts of it, configuration is a nightmare: having to override behaviours for message sizes and the number of objects contained in a message, the complexity of the security model, disposing of proxies when faulted, and finally moving back to defining interfaces in code rather than in XML. It just does not work out of the box, and I think it should. We found all of the above issues while either testing ourselves or when our products were out on site. I do understand the rationale behind it all, but surely they could have come up with a simpler implementation mechanism.

    I suppose what I'm asking is: am I looking at WCF the wrong way? What strengths does it have over the alternatives? Under what circumstances should I choose to use WCF?

    OK folks, sorry about the delay in responding; work does have a nasty habit of getting in the way sometimes :) Some clarifications. My main pain points with WCF fall into the following areas:

    - While it does work out of the box, you're left with some major surprises under the hood. As pointed out above, basic things are restricted until they are overridden:
      - the size of string that can be passed can't be over 8K
      - the number of objects that can be passed in a single message is restricted
      - proxies do not automatically recover from failures
    - The amount of configuration, while it's there, is a good thing, but understanding it all and what to use where and under which circumstances can be difficult, especially when deploying software on site with different security requirements etc. When talking about configuration, we've had to hide lots of ours in a back-end database because security and network people on-site were trying to change things in configuration files without understanding them.
    - Keeping the configuration of the interfaces in code rather than moving to explicitly defined interfaces in XML, which can be published and consumed by almost anything. I know we can export the XML from the assembly, but it's full of rubbish and certain code generators choke on it.

    I know the world moves on; I've moved on a number of times over the last (ahem) 22 years I've been developing, and I am actively using WCF, so don't get me wrong, I do understand what it's for and where it's heading. I just think there should be simpler configuration/deployment options available, easier set-up, and better management for configuration (an SQL config provider maybe, rather than just the web.config/app.config files). OK, back to the daily grind. Thanks for all your replies so far. Kind regards, Noel
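
    For what it's worth, the 8K string and object-count ceilings mentioned above correspond to quota settings that can be raised either in config or in code; a hedged sketch of the code route (the numbers are examples only, not recommendations):

        using System.ServiceModel;

        static BasicHttpBinding CreateServiceBinding()
        {
            var binding = new BasicHttpBinding();
            binding.MaxReceivedMessageSize = 10 * 1024 * 1024;          // default is 64 KB
            binding.ReaderQuotas.MaxStringContentLength = 1024 * 1024;  // default is 8192 characters
            binding.ReaderQuotas.MaxArrayLength = 1024 * 1024;
            // The "number of objects in a message" limit is the serializer's maxItemsInObjectGraph,
            // adjustable per operation via DataContractSerializerOperationBehavior.MaxItemsInObjectGraph.
            return binding;
        }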

    Read the article

  • Do you want to try? [closed]

    - by gemxia
    Only with time and hard work, that can you get an IT certification. Although there are hundreds of certifications for you to pick from, the basic steps to get certified are the same. The following steps are certain to clear your puzzles about the preparation process of your <. The first step to take is choosing a certification. It is simple but at the same time very important. Make sure to choose the certifications that are respected in your industries. The second step you should take is to evaluate your experience. Find out what skills and experience the IBM certification is expecting. Then, decide what type of training is suitable for you. Preparation books will certainly not make you an expert in subjects you’re not already an expert in. But, for the subject areas you know little or nothing about, a study guide provides you clues and guidance about what the important information from those subjects is when it comes to passing the Examkiller IBM examination exam. Visit certification forums during your 000-M62 certification exam preparation. In this way, you can learn from others’ mistakes and example, meanwhile help your own studies. Achieving your goals without proper training is a sure road to failure. Knowing about a topic and having special expertise in it are completely different. One cannot be an expert in the IT industry without the proper foundation. Taking a training class for Examkiller IBM exam might be a guaranteed way. When the economy dips and budgets get tightened, one of the first things to go from corporate spending is training. There are plenty of courses, boot camps and cram sessions that promise to prepare you for the IBM exam, but they are exceptionally expensive. As much as possible, for your own benefit, you should look for resources that are free. Vendor of IBM offers free resource in their sites. These practice exams are the closest to the real exams. If you think that you have got ready for the exam, you can take the fourth now, which is registering your exam. Even if you have passed your <, yet you can’t relax, since there are still so many certifications ahead. If you have just memorized some questions and answers, excepting a fluke, then, don’t take the IBM test exam, until you really have the experience and skills the certification requires.

    Read the article

  • "RFC 2833 RTP Event" Consecutive Events and the E "End" Bit

    - by brian_d
    Hello, I can send out an RFC 2833 DTMF event as outlined at http://www.ietf.org/rfc/rfc2833.txt

    When I do set the E "End" bit, but leave it as 0, I get the following behaviour: if for example keys 7874556332111111145855885#3 were pressed, then ALL events would be sent and show up in a program like Wireshark, however only 87456321458585#3 would sound. So the first key (which I figure could be a separate issue) and any repeats of an event (ie 11111) are failing to sound.

    In section 3.9, figure 2 of the above linked document, they give a 911 example. There, all but the last event have the E bit set. When I set the bit for all numbers, I never get an event to sound. I have thought of a couple of possible causes but do not know if they are the reason:

    1) Figure 2 shows payload types of 96 and 97 being sent. I have not done this, nor do I know exactly how to. In section 3.8, codes 96 and 97 are described as "the dynamic payload types 96 and 97 have been assigned for the redundancy mechanism and the telephone event payload respectively".

    2) In section 3.5, "E:", "A sender MAY delay setting the end bit until retransmitting the last packet for a tone, rather than on its first transmission". Does anyone have an idea of how to actually do this?

    I have also fiddled around with timestamp intervals and the RTP marker. Any help is greatly appreciated. Here is a sample Wireshark event capture of the relevant areas:

        6590 31.159045000 xx.x.x.xxx --.--.---.-- RTP EVENT Payload type=RTP Event, DTMF Pound # (end)
        Real-Time Transport Protocol
            Stream setup by SDP (frame 6225)
                Setup frame: 6225
                Setup Method: SDP
            10.. .... = Version: RFC 1889 Version (2)
            ..0. .... = Padding: False
            ...0 .... = Extension: False
            .... 0000 = Contributing source identifiers count: 0
            0... .... = Marker: False
            Payload type: telephone-event (101)
            Sequence number: 0
            Extended sequence number: 65536
            Timestamp: 0
            Synchronization Source identifier: 0x15f27104 (368210180)
        RFC 2833 RTP Event
            Event ID: DTMF Pound # (11)
            1... .... = End of Event: True
            .0.. .... = Reserved: False
            ..00 0000 = Volume: 0
            Event Duration: 2048
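
    For reference, the payload bytes decoded at the bottom of that capture (event ID, E/R bits, volume, 16-bit duration) can be packed as four bytes; a small sketch, assuming network byte order for the duration:

        // Builds the 4-byte telephone-event payload shown in the capture above.
        static byte[] BuildDtmfPayload(byte eventId, bool endBit, byte volume, ushort duration)
        {
            return new byte[]
            {
                eventId,                                           // e.g. 11 for '#'
                (byte)((endBit ? 0x80 : 0x00) | (volume & 0x3F)),  // E bit, R bit = 0, 6-bit volume
                (byte)(duration >> 8),                             // duration, high byte
                (byte)(duration & 0xFF)                            // duration, low byte
            };
        }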

    Read the article

  • SHAREPOINT: Custom Field type property storage defined for custom field

    - by Eric Rockenbach
    OK, here is a great question. I have a set of generic custom fields that are highly configurable from an end-user perspective, and the configuration is getting overbearing, as there are nearly 100-plus items each custom field allows you to configure in the areas of server/client validation, server/client events/actions, server/client bindings parent/child, display properties for form/control, etc, etc.

    Right now I'm storing most of these values as "Text" in my field XML for my property schema. I'm very familiar with the multi-column value, but this is not a complex custom type in the sense of being an array. I also considered creating serializable objects and stuffing them into the text field, then pulling them out and de-serializing them when editing through the field editor or acting on the rules through the custom SPField.

    So I'm trying to take the following, for example:

        <PropertySchema>
          <Fields>
            <Field Name="EntityColumnName" Hidden="TRUE" DisplayName="EntityColumnName" MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
            <Field Name="EntityColumnParentPK" Hidden="TRUE" DisplayName="EntityColumnParentPK" MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
            <Field Name="EntityColumnValueName" Hidden="TRUE" DisplayName="EntityColumnValueName" MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
            <Field Name="EntityListName" Hidden="TRUE" DisplayName="EntityListName" MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
            <Field Name="EntitySiteUrl" Hidden="TRUE" DisplayName="EntitySiteUrl" MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
          </Fields>
        </PropertySchema>

    and turn it into this:

        <PropertySchema>
          <Fields>
            <Field Name="ServerValidationRules" Hidden="TRUE" DisplayName="ServerValidationRules" Type="ServerValidationRulesType">
              <default></default>
            </Field>
          </Fields>
        </PropertySchema>

    Ideas?????
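
    The "serializable objects stuffed into the text field" route mentioned above can be as simple as round-tripping one settings class through XmlSerializer; a sketch, where ServerValidationRules stands in for a hypothetical class holding the validation/event/binding settings:

        using System.IO;
        using System.Xml.Serialization;

        public static class RuleStorage
        {
            public static string Save(ServerValidationRules rules)
            {
                var serializer = new XmlSerializer(typeof(ServerValidationRules));
                using (var writer = new StringWriter())
                {
                    serializer.Serialize(writer, rules);
                    return writer.ToString();   // store this in the field's Text property
                }
            }

            public static ServerValidationRules Load(string stored)
            {
                var serializer = new XmlSerializer(typeof(ServerValidationRules));
                using (var reader = new StringReader(stored))
                {
                    return (ServerValidationRules)serializer.Deserialize(reader);
                }
            }
        }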

    Read the article

  • Soon to be PhD in Computer Science - Which Path to Follow?

    - by mttr
    I am going to submit my PhD thesis within the next six months. My PhD is on managing the availabiity of large-scale distributed systems, so I have some experience actually building non-trivial systems (+ I have four years experience working as a programmer). I am now trying to figure out what I should do following the PhD. I enjoy research (a quick definition: identify problem, come up with solution, ask interesting questions, find ways to answer them, build system, experiment, contribute some new knowledge and publish). I also like teaching and supervising students. It would seem that a career in academia is the ideal thing to do (can work on non-trivial problems and contribute something of use to some or more people). However, a career in academia has two significant drawbacks. First, it can be difficult to gain access to real systems with real users which then display real problems. This creates the danger that you do work that seems important (to you and maybe to some of your colleagues), but is not really relevant to anything or anyone. Second, the pay is pretty sad. Apparently, you have to sacrifice this for the privilege of doing research. I enjoy programming, but don't just want to hack some web-based system for the rest of my life. That is, working in IT for a bank is not a future I see myself enjoying. I want to work on interesting problms (that's difficult to define clearly): things where you don't know how to start, that take some time to figure out and attack, that require a rigorous approach to demonstrate that the problem has been solved, and problems that need a solution in the real world. Give the experience of people on stackoverflow, what do you think suitable options are and why (or alternatively, what gaps in my thinking does the above reveal)? Is industrial research (aka IBM Research, Microsoft Research) the only alternative avenue to a career in academia? What other areas, companies, occupations, etc. could provide me with stimulating, inspiring work? Which regions, countries am I most likely to find such work? Please share your experience.

    Read the article

  • Using Effect For Fog of War

    - by Qua
    I'm trying to apply fog of war to areas on the screen not currently visible to the player. I do this by rendering the game content into one RenderTarget and the fog of war into another, and then I merge them with an effect file that takes the color from the game RenderTarget and the alpha from the fog-of-war RenderTarget. The FOW RenderTarget is black where the FOW appears, and white where it doesn't.

    This does work, but it colors the fog of war (the unrevealed locations) white instead of the intended color of black. Before applying the effect I clear the backbuffer of the device to white. When I try to clear it to black, none of the fog of war appears at all, which I assume is a product of alpha blending with black. It works for all other colors, however - giving the resulting screen a tint of that color. How do I achieve a black fog while still being able to do alpha blending between the two render targets?

    The rendering code for applying the FOW:

        private RenderTarget2D mainTarget;
        private RenderTarget2D lightTarget;

        private void CombineRenderTargetsAndDraw()
        {
            batch.GraphicsDevice.SetRenderTarget(null);
            batch.GraphicsDevice.Clear(Color.White);

            fogOfWar.Parameters["LightsTexture"].SetValue(lightTarget);

            batch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
            fogOfWar.CurrentTechnique.Passes[0].Apply();
            batch.Draw(
                mainTarget,
                new Rectangle(0, 0, batch.GraphicsDevice.PresentationParameters.BackBufferWidth,
                              batch.GraphicsDevice.PresentationParameters.BackBufferHeight),
                Color.White
            );
            batch.End();
        }

    The effect file I'm using to apply the FOW:

        texture LightsTexture;

        sampler ColorSampler : register(s0);
        sampler LightsSampler = sampler_state{
            Texture = <LightsTexture>;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float2 TexCoord : TEXCOORD0;
        };

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float2 tex = input.TexCoord;
            float4 color = tex2D(ColorSampler, tex);
            float4 alpha = tex2D(LightsSampler, tex);
            return float4(color.r, color.g, color.b, alpha.r);
        }

        technique Technique1
        {
            pass Pass1
            {
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }

    Read the article

  • mysql index optimization for a table with multiple indexes that index some of the same columns

    - by Sean
    I have a table that stores some basic data about visitor sessions on third party web sites. This is its structure: id, site_id, unixtime, unixtime_last, ip_address, uid There are four indexes: id, site_id/unixtime, site_id/ip_address, and site_id/uid There are many different types of ways that we query this table, and all of them are specific to the site_id. The index with unixtime is used to display the list of visitors for a given date or time range. The other two are used to find all visits from an IP address or a "uid" (a unique cookie value created for each visitor), as well as determining if this is a new visitor or a returning visitor. Obviously storing site_id inside 3 indexes is inefficient for both write speed and storage, but I see no way around it, since I need to be able to quickly query this data for a given specific site_id. Any ideas on making this more efficient? I don't really understand B-trees besides some very basic stuff, but it's more efficient to have the left-most column of an index be the one with the least variance - correct? Because I considered having the site_id being the second column of the index for both ip_address and uid but I think that would make the index less efficient since the IP and UID are going to vary more than the site ID will, because we only have about 8000 unique sites per database server, but millions of unique visitors across all ~8000 sites on a daily basis. I've also considered removing site_id from the IP and UID indexes completely, since the chances of the same visitor going to multiple sites that share the same database server are quite small, but in cases where this does happen, I fear it could be quite slow to determine if this is a new visitor to this site_id or not. The query would be something like: select id from sessions where uid = 'value' and site_id = 123 limit 1 ... so if this visitor had visited this site before, it would only need to find one row with this site_id before it stopped. This wouldn't be super fast necessarily, but acceptably fast. But say we have a site that gets 500,000 visitors a day, and a particular visitor loves this site and goes there 10 times a day. Now they happen to hit another site on the same database server for the first time. The above query could take quite a long time to search through all of the potentially thousands of rows for this UID, scattered all over the disk, since it wouldn't be finding one for this site ID. Any insight on making this as efficient as possible would be appreciated :) Update - this is a MyISAM table with MySQL 5.0. My concerns are both with performance as well as storage space. This table is both read and write heavy. If I had to choose between performance and storage, my biggest concern is performance - but both are important. We use memcached heavily in all areas of our service, but that's not an excuse to not care about the database design. I want the database to be as efficient as possible.

    Read the article

  • When does IE7 recompute styles? Doesn't work reliably when a class is added to the body.

    - by Kid A
    I have an interesting problem here. I'm using a class on the <body> element as a switch to drive a fair amount of layout behavior on my site. If the class is applied, certain things happen, and if the class isn't applied, they don't happen. The relevant CSS is roughly like this:

        .rightSide { display:none; }
        .showCommentsRight .rightSide { display:block; width:50%; }
        .showCommentsRight .leftSide { display:block; width:50%; }

    And the HTML:

        <body class="showCommentsRight">
          <div class="container">
            <div class="leftSide"></div>
            <div class="rightSide"></div>
          </div>
          <div class="container">
            <div class="leftSide"></div>
            <div class="rightSide"></div>
          </div>
          <div class="container">
            <div class="leftSide"></div>
            <div class="rightSide"></div>
          </div>
        </body>

    I've simplified things, but this is essentially the method. The whole page changes layout (hiding the right side in three different areas) when the flag is set on the body. This works in Firefox and IE8. It does not work in IE8 in compatibility mode. What is fascinating is that if you sit there and refresh the page, the results can vary: it will pick a different section's right side to show. Sometimes it will show only the top section's right side, sometimes it will show the middle. I have tried a validator (to look for malformed HTML), double-checking the CSS formatting, and making sure my IE7 hack sheet wasn't having an effect.

    So my question is:
    * Is there a way that this behavior can be made reliable?
    * When does IE7 decide to re-do styling?

    Thanks everyone.

    Read the article

  • How hard is programming? Really. [closed]

    - by Bubba88
    Hi! The question is about your perception of programming activity. How hard/exacting this task is? There is much buzz about programming nowadays, people say that programmers are smart, very technical and abstract at a time, know much about world, psychology etc.. They say, that programmers got really powerful brain thing, cause there is much to keep in consideration simultaneously again with much information folded into each other associatively (up 10 levels of folding they say))) Still, there are some terms to specify at our own.. So that is the question: What do you think about programming in general? Is it hard? Is it 'for everyone' or for the particular kind of people only? How much non-CS background do you need to program (just to program, really; enterprise applications for example)? How long is the learning curve? (again, for programming in general) And another bunch of random questions: - If you were not to like/love programming, would that be a serious trouble bothering your current employment? - If you were to start from the beginning, would you chose that direction this time? - What other areas (jobs or maybe hobbies) are comparable to programming in the way they can explode someone's lovely brain? - Is 'non turing-complete programming' (SQL, XML, etc.) comparable to what we do or is it really way easier, less requiring, cheap and akin to cooking :)? Well, the essence is: How would you describe programming activity WRT to its difficulty? Or, on the other hand: Did you ever catch yourself thinking at some point: OMG, it's sooo hard! I don't know how would I ever program, even carried away this way and doing programming just for fun? It's very interesting to know your opinion, your'e the programmers after all. I mean much people must be exaggerating/speculating about the thing they do not really know about. But that musn't be the case here on SO :) P.S.: I'll try my best to update this post later, and you please edit it too. At least I'll get decent English in my question text :)

    Read the article

  • Accidental Complexity in OpenSSL HMAC functions

    - by Hassan Syed
    SSL Documentation Analaysis This question is pertaining the usage of the HMAC routines in OpenSSL. Since Openssl documentation is a tad on the weak side in certain areas, profiling has revealed that using the: unsigned char *HMAC(const EVP_MD *evp_md, const void *key, int key_len, const unsigned char *d, int n, unsigned char *md, unsigned int *md_len); From here, shows 40% of my library runtime is devoted to creating and taking down **HMAC_CTX's behind the scenes. There are also two additional function to create and destroy a HMAC_CTX explicetly: HMAC_CTX_init() initialises a HMAC_CTX before first use. It must be called. HMAC_CTX_cleanup() erases the key and other data from the HMAC_CTX and releases any associated resources. It must be called when an HMAC_CTX is no longer required. These two function calls are prefixed with: The following functions may be used if the message is not completely stored in memory My data fits entirely in memory, so I choose the HMAC function -- the one whose signature is shown above. The context, as described by the man page, is made use of by using the following two functions: HMAC_Update() can be called repeatedly with chunks of the message to be authenticated (len bytes at data). HMAC_Final() places the message authentication code in md, which must have space for the hash function output. The Scope of the Application My application generates a authentic (HMAC, which is also used a nonce), CBC-BF encrypted protocol buffer string. The code will be interfaced with various web-servers and frameworks Windows / Linux as OS, nginx, Apache and IIS as webservers and Python / .NET and C++ web-server filters. The description above should clarify that the library needs to be thread safe, and potentially have resumeable processing state -- i.e., lightweight threads sharing a OS thread (which might leave thread local memory out of the picture). The Question How do I get rid of the 40% overhead on each invocation in a (1) thread-safe / (2) resume-able state way ? (2) is optional since I have all of the source-data present in one go, and can make sure a digest is created in place without relinquishing control of the thread mid-digest-creation. So, (1) can probably be done using thread local memory -- but how do I resuse the CTX's ? does the HMAC_final() call make the CTX reusable ?. (2) optional: in this case I would have to create a pool of CTX's. (3) how does the HMAC function do this ? does it create a CTX in the scope of the function call and destroy it ? Psuedocode and commentary will be useful.

    Read the article

  • How do we, as a community, help encourage programming in public schools? (Or state Schools for the U

    - by NoMoreZealots
    PRIMARY MOTIVATION My office gets involved with the "First Robotics" competitions and one thing that lingers year to year is the students typically have no preparation for doing even simple programming as part of the public schools system. While the science classes provide some basic grasp of mechanical and electrical concepts, by in large computer programming gets no coverage from the curriculum. (This my be different in other areas of the country/world.) What makes it worse is there is only a short period of time you have to prepare the student's and help them design the robot. Talking to some professors from local colleges, it's a problem because you can't assume even the most basic understanding for freshman CS majors. Languages like Python, Lua and BASIC are simple enough for at least high school level students, if not younger. SCOPE So how do you get public schools to support a programming, at least to the level of "Try it in BASIC" examples that used to be at the end of a chapter in my Algebra book? At least enough to prepare them for event's such as the FIRST Robotic competitions. Which the primary objectives are to teach problem solving and team work, and to possible foster an interest in Math, Science and Engineering in general. (Not force feed to them, as some people her seem to be implying.) Edit: Why teach kids: (Since 2000 CS enrollment in US colleges has decreased by 70% while college enrollment has increased, this is a PROBLEM.) Saying there is no value in teaching someone programming in Jr./High school because they might think "they know programming." Is like saying there's no value in teaching High school science and physics, because they might decide they "know physics." Leading to abuse like: "I passed a high school physics class, I'm going to develop a Unified Quantum Gravitational Theory." Better Prepared students are better students. Instead it would allows college programs to raise the bar on the entry level courses, allowing students to be weeded out based on their understanding of more advanced material. Plus people who did poorly in that in topic in High school aren't as likely to say "I think there's money in computer's so I'll computer science." Plus if people take it in high school and decide THEN that it's not for them, it's better than them wasting their money to PAY a college to figure that out. The result is that people who take the degree are more likely to succeed and be there for the RIGHT reasons. (i.e. It's what they REALLY want to do. And that's REALLY the key to being good at anything.) Programming is like anything else, the more practice and genuine interest you have the better you get. If you start them later, they get less practice. The earlier give them the opportunity to start, the more practice they will get. All other things equal, the more practice the better the programmer.

    Read the article

  • What Regex can strip e.g. "note:" and "firstName: " from the left of a string?

    - by Edward Tanguay
    I need to strip the "label" off the front of strings, e.g.

        note: this is a note

    needs to return "note" and "this is a note".

    I've produced the following code example but am having trouble with the regexes. What code do I need in the two ???????? areas below so that I get the desired results shown in the comments?

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Text.RegularExpressions;

        namespace TestRegex8822
        {
            class Program
            {
                static void Main(string[] args)
                {
                    List<string> lines = new List<string>();
                    lines.Add("note: this is a note");
                    lines.Add("test: just a test");
                    lines.Add("test:\t\t\tjust a test");
                    lines.Add("firstName: Jim"); //"firstName" IS a label because it does NOT contain a space
                    lines.Add("She said this to him: follow me."); //this is NOT a label since there is a space before the colon
                    lines.Add("description: this is the first description");
                    lines.Add("description:this is the second description"); //no space after colon
                    lines.Add("this is a line with no label");

                    foreach (var line in lines)
                    {
                        Console.WriteLine(StringHelpers.GetLabelFromLine(line));
                        Console.WriteLine(StringHelpers.StripLabelFromLine(line));
                        Console.WriteLine("--");
                        //note
                        //this is a note
                        //--
                        //test
                        //just a test
                        //--
                        //test
                        //just a test
                        //--
                        //firstName
                        //Jim
                        //--
                        //
                        //She said this to him: follow me.
                        //--
                        //description
                        //this is the first description
                        //--
                        //description
                        //this is the first description
                        //--
                        //
                        //this is a line with no label
                        //--
                    }
                    Console.ReadLine();
                }
            }

            public static class StringHelpers
            {
                public static string GetLabelFromLine(this string line)
                {
                    string label = line.GetMatch(@"^?:(\s)"); //???????????????
                    if (!label.IsNullOrEmpty())
                        return label;
                    else
                        return "";
                }

                public static string StripLabelFromLine(this string line)
                {
                    return ...//???????????????
                }

                public static bool IsNullOrEmpty(this string line)
                {
                    return String.IsNullOrEmpty(line);
                }
            }

            public static class RegexHelpers
            {
                public static string GetMatch(this string text, string regex)
                {
                    Match match = Regex.Match(text, regex);
                    if (match.Success)
                    {
                        string theMatch = match.Groups[0].Value;
                        return theMatch;
                    }
                    else
                    {
                        return null;
                    }
                }
            }
        }
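
    One possible reading of the rules in the comments (a label is a run of characters containing no whitespace and no colon at the very start of the line, immediately followed by a colon) leads to a pattern like the following, offered as a sketch rather than the only answer:

        public static string GetLabelFromLine(this string line)
        {
            Match m = Regex.Match(line, @"^([^\s:]+):");
            return m.Success ? m.Groups[1].Value : "";
        }

        public static string StripLabelFromLine(this string line)
        {
            return Regex.Replace(line, @"^[^\s:]+:\s*", "");
        }

    Against the sample lines above, "She said this to him: follow me." is left untouched because "She" is followed by a space rather than a colon, while the tab-separated and no-space-after-colon variants both strip cleanly.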

    Read the article

  • Keeping track of business rules within IT department?

    - by evaldas-alexander
    I am looking for the best way to keep track of the business rules for both developers and everybody else (support staff / management) in a startup enviroment. The challenge is that our business model requires quite a lot of different business rules, which are created pretty much on the fly and evolving organically after that. After running this project for 3+ years, we have so many of such rules that often the only way to be sure about what the application is supposed to do in a certain situation is to go find the module responsible for that process and analyze its code and comments. That is all fine as long as you have one single developer who created the entire application from the scratch, but every new developer needs to go over pretty much entire codebase in order to understand how the application works. Even bigger problem is that non technical employees don't even have that option and therefore are forced to ask me pretty much every day how some certain case would be handled by the application. Quick example - we only start charging for our customer campaigns once they have been active for at least 72 hours, but at the same time we stop creating invoices for campaigns that belong to insolvent accounts and close such accounts within a month of the first failed charge. That does not apply to accounts that are set to "non-chargeable" which most commonly belongs to us since we are using the service ourselves. The invoices are created on the 1st of each month and include charges from the previous month + any current balance that the account might have. However, some customers are charged only 4 days after their invoice has been generated due to issues with their billing department. In addition to that, invoices are also created when customer deactivates his campaign, but that can only be done once the campaign is not longer under mandatory 6 month contract, unless account manager approves early deactivation. I know, that's quite a lot of rules that need to be taken into account when answering a question "when do we bill our customers", but actually I could still append an asterisk at the end of each sentence in order to disclose some rare exceptions. Of course, it would be easiest just to keep the business rules to the minimum, but we need to adapt to changing marketplace - i.e. less than a year ago we had no contracts whatsoever. One idea that I had so far was a simplistic wiki with categories corresponding to areas such as "Account activation", "Invoicing", "Collection procedures" and so on. Another idea would be to have giant interactive flowchart showing the entire customer "life cycle" from prospecting to account deactivation. What are your experiences / suggestions?

    Read the article

  • Separate specific #ifdef branches

    - by detly
    In short: I want to generate two different source trees from the current one, based only on one preprocessor macro being defined and another being undefined, with no other changes to the source. If you are interested, here is my story...

    In the beginning, my code was clean. Then we made a new product, and yea, it was better. But the code saw only the same peripheral devices, so we could keep the same code. Well, almost. There was one little condition that needed to be changed, so I added:

        #if defined(PRODUCT_A)
            condition = checkCat();
        #elif defined(PRODUCT_B)
            condition = checkCat() && checkHat();
        #endif

    ...to one and only one source file. In the general all-source-files-include-this header file, I had:

        #if !(defined(PRODUCT_A)||defined(PRODUCT_B))
        #error "Don't make me replace you with a small shell script. RTFM."
        #endif

    ...so that people couldn't compile it unless they explicitly defined a product type. All was well. Oh... except that modifications were made, components changed, and since the new hardware worked better we could significantly re-write the control systems. Now when I look upon the face of the code, there are more than 60 separate areas delineated by either:

        #ifdef PRODUCT_A
        ...
        #else
        ...
        #endif

    ...or the same, but for PRODUCT_B. Or even:

        #if defined(PRODUCT_A)
        ...
        #elif defined(PRODUCT_B)
        ...
        #endif

    And of course, sometimes sanity took a longer holiday and:

        #ifdef PRODUCT_A
        ...
        #endif
        #ifdef PRODUCT_B
        ...
        #endif

    These conditions wrap anywhere from one to two hundred lines (you'd think that the last one could be done by switching header files, but the function names need to be the same). This is insane. I would be better off maintaining two separate product-based branches in the source repo and porting any common changes. I realise this now.

    Is there something that can generate the two different source trees I need, based only on PRODUCT_A being defined and PRODUCT_B being undefined (and vice versa), without touching anything else (ie. no header inclusion, no macro expansion, etc)?

    Read the article

  • WPF: Improving Performance for Running on Older PCs

    - by Phil Sandler
    So, I'm building a WPF app and did a test deployment today, and found that it performed pretty poorly. I was surprised, as we are really not doing much in the way of visual effects or animations. I deployed on two machines: the fastest and the slowest that will need to run the application (the slowest PC has an Intel Celeron 1.80GHz with 2GB RAM). The application ran pretty well on the faster machine, but was choppy on the slower machine. And when I say "choppy", I mean the cursor jumped even just passing it over any open window of the app that had focus. I opened the Task Manager Performance window, and could see that the CPU usage jumped whenever the app had focus and the cursor was moving over it. If I gave focus to another (e.g. Excel), the CPU usage went back down after a second. This happened on both machines, but the choppiness was only noticeable on the slower machine. I had very limited time to tinker on the deployment machines, so didn't do a lot of detailed testing. The app runs fine on my development machine, but I also see the CPU spiking up to 10% there, just running the cursor over the window. I downloaded the WPF performance tool from MS and have been tinkering with it (on my dev machine). The docs say this about the "Frame Rate" metric in the Perforator tool: For applications without animation, this value should be near 0. The app is not doing any heavy animation, but the frame rate stays near 50 when the cursor is over any window. The screens I tested on have column headers in a grid that "highlight" and buttons that change color and appearance when scrolled over. Even moving the mouse on blank areas of the windows cause the same Frame rate and CPU usage (doesn't seem to be related to these minor animations). (Also, I am unable to figure out how to get anything but the two default tools--Perforator and Visual Profiler--installed into the WPF performance tool. That is probably a separate question). I also have Redgate's profiling tool, but I'm not sure if that can shed any light on rendering performance. So, I realize this is not an easy thing to troubleshoot without specifics or sample code (which I can't post). My questions are: What are some general things to look for (or avoid) in the code to improve performance? What steps can I take using the WPF performance tool to narrow down the problem? Is the PC spec listed above (Intel Celeron 1.80GHz with 2GB RAM) too slow to be running even vanilla WPF applications?

    Read the article

  • Finding contained bordered regions from Excel imports.

    - by dmaruca
    I am importing massive amounts of data from Excel that have various table layouts. I have good enough table detection routines and merge cell handling, but I am running into a problem when it comes to dealing with borders. Namely performance. The bordered regions in some of these files have meaning. Data Setup: I am importing directly from Office Open XML using VB6 and MSXML. The data is parsed from the XML into a dictionary of cell data. This wonks wonderfully and is just as fast as using docmd.transferspreadsheet in Access, but returns much better results. Each cell contains a pointer to a style element which contains a pointer to a border element that defines the visibility and weight of each border (this is how the data is structured inside OpenXML, also). Challenge: What I'm trying to do is find every region that is enclosed inside borders, and create a list of cells that are inside that region. What I have done: I initially created a BFS(breadth first search) fill routine to find these areas. This works wonderfully and fast for "normal" sized spreadsheets, but gets way too slow for imports into the thousands of rows. One problem is that a border in Excel could be stored in the cell you are checking or the opposing border in the adjacent cell. That's ok, I can consolidate that data on import to reduce the number of checks needed. One thing I thought about doing is to create a separate graph that outlines the cells using the borders as my edges and using a graph algorithm to find regions that way, but I'm having trouble figuring out how to implement the algorithm. I've used Dijkstra in the past and thought I could do similar with this. So I can span out using no endpoint to search the entire graph, and if I encounter a closed node I know that I just found an enclosed region, but how can I know if the route I've found is the optimal one? I guess I could flag that to run a separate check for the found closed node to the previous node ignoring that one edge. This could work, but wouldn't be much better performance wise on dense graphs. Can anyone else suggest a better method? Thanks for taking the time to read this.
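
    As a point of comparison for the BFS fill described above, here is the shape of that search sketched in C# rather than VB6 (HasBorder is a hypothetical lookup over the already-consolidated border dictionary; sides: 0=up, 1=down, 2=left, 3=right):

        static List<(int Row, int Col)> FillRegion(int startRow, int startCol, int maxRow, int maxCol,
                                                   Func<int, int, int, bool> hasBorder)
        {
            var visited = new HashSet<(int, int)> { (startRow, startCol) };
            var queue = new Queue<(int, int)>();
            var region = new List<(int, int)>();
            var moves = new[] { (dr: -1, dc: 0, side: 0), (dr: 1, dc: 0, side: 1),
                                (dr: 0, dc: -1, side: 2), (dr: 0, dc: 1, side: 3) };
            queue.Enqueue((startRow, startCol));
            while (queue.Count > 0)
            {
                var (r, c) = queue.Dequeue();
                region.Add((r, c));
                foreach (var m in moves)
                {
                    int nr = r + m.dr, nc = c + m.dc;
                    if (nr < 0 || nc < 0 || nr > maxRow || nc > maxCol) continue;  // off the sheet
                    if (hasBorder(r, c, m.side)) continue;                         // a border edge blocks the fill
                    if (visited.Add((nr, nc))) queue.Enqueue((nr, nc));
                }
            }
            return region;
        }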

    Read the article
