Search Results

Search found 958 results on 39 pages for 'limitations'.

Page 25/39

  • Keybindings for individual letter keys (not modifier-combinations) on a GtkTextView widget (Gtk3 and PyGI)

    - by monotasker
    I've been able to set several keyboard shortcuts for a GtkTextView and a GtkTextEntry using the new css provider system. I'm finding, though, that I only seem to be able to establish keybindings for combinations including a modifier key. The widget doesn't respond to any bindings I set up that use the delete key, the escape key, or individual letter or punctuation keys alone. Here's the code where I set up the css provider for the keybindings:

        #set up style context
        keys = Gtk.CssProvider()
        keys.load_from_path(os.path.join(data_path, 'keybindings.css'))
        #set up style contexts and css providers
        widgets = {'window': self.window, 'vbox': self.vbox, 'toolbar': self.toolbar, 'search_entry': self.search_entry, 'paned': self.paned, 'notelist_treeview': self.notelist_treeview, 'notelist_window': self.notelist_window, 'notetext_window': self.notetext_window, 'editor': self.editor, 'statusbar': self.statusbar}
        for l, w in widgets.iteritems():
            w.get_style_context().add_provider(keys, Gtk.STYLE_PROVIDER_PRIORITY_USER)

    Then in keybindings.css this is an example of what works:

        @binding-set gtk-vi-text-view {
            bind "<ctrl>b" { "move-cursor" (display-lines, -5, 0) }; /* 5 lines up */
            bind "<ctrl>k" { "move-cursor" (display-lines, -1, 0) }; /* up */
            bind "<ctrl>j" { "move-cursor" (display-lines, 1, 0) };  /* down */
        }

    Part of what I'm trying to do is just add proper delete-key functionality to the text widgets (right now the delete key does nothing at all). So if I add a binding like one of these, nothing happens:

        bind "Delete" { "delete-selection" () };
        bind "Delete" { "delete-from-cursor" (chars, 1) };

    The other part of what I want to do is more elaborate. I want to set up something like Vim's command and visual modes. So at the moment I'm just playing around with (a) setting the widget to editable=false by hitting the esc key; and (b) using home-row letters to move the cursor (as a proof-of-concept exercise). So far there's no response from the escape key or from the letter keys, even though the bindings work when I apply them to modifier-key combinations. For example, I do this in the css for the text widget:

        bind "j" { "move-cursor" (display-lines, 1, 0) };       /* down */
        bind "k" { "move-cursor" (display-lines, -1, 0) };      /* up */
        bind "l" { "move-cursor" (logical-positions, 1, 0) };   /* right */
        bind "h" { "move-cursor" (logical-positions, -1, 0) };  /* left */

    but none of these bindings does anything, even if other bindings in the same set are respected. What's especially odd is that the vim-like movement bindings above are respected when I attach them to a GtkTreeView widget for navigating the tree-view options:

        @binding-set gtk-vi-tree-view {
            bind "j" { "move-cursor" (display-lines, 1) };  /* selection down */
            bind "k" { "move-cursor" (display-lines, -1) }; /* selection up */
        }

    So it seems like there are limitations or overrides of some kind on keybindings for the TextView widget (and for the del key?), but I can't find documentation of anything like that. Are these just things that can't be done with the css providers? If so, what are my alternatives for non-modified keybindings? Thanks.
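
    For illustration (this is not from the original post), one common fallback when the CSS binding sets ignore unmodified keys is to handle key-press-event on the GtkTextView directly in Python. A rough sketch, with hypothetical widget names and mappings:

        from gi.repository import Gtk, Gdk

        def on_editor_key_press(widget, event):
            # widget is the GtkTextView ('editor' above); return True to stop
            # GTK's default handling, False to let it continue.
            buf = widget.get_buffer()
            if event.keyval == Gdk.KEY_Delete:
                start = buf.get_iter_at_mark(buf.get_insert())
                end = start.copy()
                end.forward_char()
                buf.delete(start, end)          # delete one character forward
                return True
            if event.keyval == Gdk.KEY_Escape:
                widget.set_editable(False)      # crude "command mode"
                return True
            if not widget.get_editable():
                # vi-style movement while the view is not editable
                moves = {Gdk.KEY_j: (Gtk.MovementStep.DISPLAY_LINES, 1),
                         Gdk.KEY_k: (Gtk.MovementStep.DISPLAY_LINES, -1),
                         Gdk.KEY_l: (Gtk.MovementStep.LOGICAL_POSITIONS, 1),
                         Gdk.KEY_h: (Gtk.MovementStep.LOGICAL_POSITIONS, -1)}
                if event.keyval in moves:
                    step, count = moves[event.keyval]
                    widget.emit('move-cursor', step, count, False)
                    return True
            return False

        # hooked up once, e.g. in the window setup code:
        # self.editor.connect('key-press-event', on_editor_key_press)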

    Read the article

  • SQLAuthority News – Download Whitepaper – SQL Server Analysis Services to Hive

    - by pinaldave
    The SQL Server Analysis Service is a very interesting subject and I have always enjoyed learning about it. You can read my earlier article over here. Big Data is my new interest and I have been exploring it recently. During this weekend this blog post caught my attention and I enjoyed reading it. Big Data is the next big thing. The growth is predicted to be 60% per year till 2016. There is no single solution to the growing need for big data available in the market right now, and there is no one solution in the business-intelligence eco-system either. However, the need for a solution is ever increasing. I am personally a Klout user. You can see my Klout profile over here. I do understand what Klout is trying to achieve – a single place to measure the influence of a person. However, it works a bit mysteriously. There are plenty of social media sites available currently in the internet world. The biggest problem all the social media sites face is that everybody opens an account but hardly anyone logs back in. To overcome this issue and to have returning visitors, Klout has come up with a system where visitors can give 5/10 K+ to other users in a particular area. Looking at all the activities Klout is doing, it is indeed a big consumer of Big Data, as well as an early adopter of big data and Hadoop based systems. Klout has close to 1 trillion rows of data to be analyzed, as well as a nearly thousand-terabyte warehouse. While Hive, the language used for Big Data, supports ad-hoc queries using HiveQL, there are always better solutions. The alternate solution would be using SQL Server Analysis Services (SSAS) along with HiveQL. As there is no direct method to achieve this, there are a few common workarounds already in place. A new ODBC driver from Klout has broken through the limitation, and the SQL Server Relational Engine can be used as an intermediate stage before SSAS. In this white paper the same solutions have been discussed in depth. The white paper discusses the following important concepts: the Klout Big Data solution; Big Data Analytics based on Analysis Services; Hadoop/Hive and Analysis Services integration; limitations of direct connectivity; pass-through queries to linked servers; best practices and lessons learned. This white paper discusses all the important concepts which have enabled Klout to go to the next level with all the offerings, as well as helped efficiency by offering a few out-of-the-box solutions. I personally enjoyed reading this white paper and I encourage all of you to do so. SQL Server Analysis Services to Hive Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology

    Read the article

  • How can I keep directories in sync

    - by Guillaume Boudreau
    I have a directory, dirA, that users can work in: they can create, modify, rename and delete files & sub-directories in dirA. I want to keep another directory, dirB, in sync with dirA. What I'd like is a discussion on finding a working algorithm that would achieve the above, with the limitations listed below. Requirements: 1. Something asynchronous - I don't want to stop file operations in dirA while I work in dirB. 2. I can't assume that I can just blindly rsync dirA to dirB on a regular interval - dirA could contain millions of files & directories, and terabytes of data. Completely walking the dirA tree could take hours. Those two requirements make this really difficult. Having it asynchronous means that when I start working on a specific file from dirA, it might have moved a lot since it appeared. And the second limitation means that I really need to watch dirA, and work on the atomic file operations that I notice. Current (broken) implementation: 1. Log all file & directory operations in dirA. 2. Using a separate process, read that log, and 'repeat' all the logged operations in dirB. Why is it broken:

        echo 1 > dirA/file1
        # Allow the 'log reader' process to create dirB/file1:
        #   log = "write dirA/file1"; action = cp dirA/file1 dirB/file1; result = OK
        echo 1 > dirA/file2
        mv dirA/file1 dirA/file3
        mv dirA/file2 dirA/file1
        rm dirA/file3
        # End result: file1 contains '1'
        # 'log reader' process starts working on the 4 above file operations:
        #   log = "write file2";        action = cp dirA/file2 dirB/file2; result = failed: there is no dirA/file2
        #   log = "rename file1 file3"; action = mv dirB/file1 dirB/file3; result = OK
        #   log = "rename file2 file1"; action = mv dirB/file2 dirB/file1; result = failed: there is no dirB/file2
        #   log = "delete file3";       action = rm dirB/file3;            result = OK
        # End result in dirB: no more files!

    Another broken example:

        echo 1 > dirA/dir1/file1
        mv dirA/dir1 dirA/dir2
        # 'log reader' process starts working on the 2 above file operations:
        #   log = "write file1";      action = cp dirA/dir1/file1 dirB/dir1/file1; result = failed: there is no dirA/dir1/file1
        #   log = "rename dir1 dir2"; action = mv dirB/dir1 dirB/dir2;             result = failed: there is no dirB/dir1
        # End result in dirB: nothing!
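
    One way to see a possible fix (this sketch is not from the original question): make every journal entry self-contained at the moment it is logged, so the replayer never has to read dirA's current state. The snapshot directory and helper names below are made up, and the sketch ignores the real cost of snapshotting large files and the harder directory-rename cases:

        import os, shutil

        journal = []  # appended by the watcher as events arrive, replayed in order

        def log_write(dirA, path):
            # snapshot the content *now*, while the path still exists in dirA
            # ('/var/tmp/sync-snapshots' is a made-up location, assumed to exist)
            snap = os.path.join('/var/tmp/sync-snapshots', str(len(journal)))
            shutil.copy2(os.path.join(dirA, path), snap)
            journal.append(('write', path, snap))

        def log_rename(old, new):
            journal.append(('rename', old, new))

        def log_delete(path):
            journal.append(('delete', path))

        def replay(dirB):
            while journal:
                op = journal.pop(0)
                if op[0] == 'write':
                    _, path, snap = op
                    dest = os.path.join(dirB, path)
                    if os.path.dirname(path):
                        os.makedirs(os.path.dirname(dest), exist_ok=True)
                    shutil.copy2(snap, dest)
                    os.unlink(snap)
                elif op[0] == 'rename':
                    _, old, new = op
                    os.rename(os.path.join(dirB, old), os.path.join(dirB, new))
                elif op[0] == 'delete':
                    os.unlink(os.path.join(dirB, op[1]))

    With the content captured at log time, the first broken example replays cleanly: the "write file2" entry no longer depends on dirA/file2 still existing when the log reader gets to it.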

    Read the article

  • Windows Phone 7 + Azure... and a couple of nuggets

    I recently gave a talk about Windows Phone 7 at a BizSpark Camp in San Francisco.  The camp had two focuses: Azure and Windows Phone 7.  My presentation covered the WP7 portion of the camp.  During my presentation I highlighted the phone platform and talked about some of the differentiators from a design, technology and business standpoint.  Whenever I watch presentations or go to tech meet-ups I feel like I get the most value when I can walk away with a few nuggets that I wouldn't necessarily have known about otherwise.  That said, I tried to add a few resources into my presentation that should be helpful when building WP7 apps.  Nuggets: Seeing that the camp was focused on Azure and WP7, I decided to augment my presentation with a code sample.  The intention was to give some insight on how to approach building WP7 applications that talk to Azure.  Some colleagues of mine here at Clarity have posted a sample on CodePlex focused on getting up and running with WP7 and Azure... you can check it out HERE.  The project is not a hello-world app, and is targeted at people who have some experience with the platform and a working knowledge of Silverlight. Also, during my presentation I mentioned some limitations with the current phone SDK.  Our sample code contains workarounds for the following: #1 Panorama control, #2 tilt effect, #3 animating frame, #4 sample architecture (leveraging MVVM Light) and coding patterns.  Note: for the sample phone project we used an Azure token that will expire in the next couple of months.  When that happens, in the downloads section of the CodePlex project there is a link to a local development fabric that can be used for local development. Presentation: Admittedly, the slide deck is pretty design heavy and doesn't contain much text.  This was semi-intentional, to encourage people to come out to the camps and hear it first hand.  There is some additional info found in the notes of the PPTX.  Don't forget to check out the full presentation at the Chicago BizSpark Camp on May 21st here at the Clarity office, or on June 4th in Los Angeles. You can DOWNLOAD the slides here:  PPTX  |  PDF, or view it inline below.  View more presentations from eklimcz. Cheers! Erik Klimczak  | [email protected] | twitter.com/eklimcz Did you know that DotNetSlackers also publishes .NET articles written by top known .NET authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • Class instance clustering in object reference graph for multi-entries serialization

    - by Juh_
    My question is on the best way to cluster a graph of class instances (i.e. objects, the graph nodes) linked by object references (the -directed- edges of the graph) around specifically marked objects. To explain my question better, let me explain my motivation: I currently use a moderately complex system to serialize the data used in my projects. "Marked" objects have a specific attribute which stores a "saving entry": the path to an associated file on disc (but it could be done for any storage type providing the suitable interface). Those objects can then be serialized automatically (eg: obj.save()). The serialization of a marked object 'a' implicitly contains all objects 'b' to which 'a' has a reference, directly s.t. a.b = b, or indirectly s.t. a.c.b = b for some object 'c'. This is very simple and basically defines specific storage entries for specific objects. I then have "container" type objects that: can be serialized similarly (in fact they are or can be "marked"); don't serialize in their storage entries the "marked" objects they reference directly: if a and a.b are both marked, a.save() calls b.save() and stores a.b = storage_entry(b). So, if I serialize 'a', it will automatically serialize all objects that can be reached from 'a' through the object reference graph, possibly in multiple entries. That is what I want, and it usually provides the functionality I need. However, it is very ad-hoc and there are some structural limitations to this approach: the multi-entry saving can only work through direct connections in "container" objects, and there are situations with undefined behavior, such as if two "marked" objects 'a' and 'b' both have a reference to an unmarked object 'c'. In this case my system will store 'c' in both 'a' and 'b', making an implicit copy which not only doubles the storage size, but also changes the object reference graph after re-loading. I am thinking of generalizing the process. Apart from the practical questions on implementation (I am coding in python, and use Pickle to serialize my objects), there is a general question on the way to attach (cluster) unmarked objects to marked ones. So, my questions are: What are the important issues that should be considered? Basically, why not just use any graph parsing algorithm with the "attach to last marked node" behavior? Is there any work done on this problem, practical or theoretical, that I should be aware of? Note: I added the tag graph-database because I think the answer might come from that field, even if the question itself is not about databases.
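
    As a rough illustration of the clustering question (not code from the actual system), the sketch below walks the reference graph from each marked object, attaches every unmarked object to the first marked owner that reaches it, and reports objects reachable from more than one owner - the 'c' case that currently gets duplicated. It assumes objects expose their references through __dict__ and that marked objects can be recognised by a hypothetical storage_entry attribute:

        def cluster(marked_roots):
            owner = {}       # id(obj) -> the marked root it is clustered under
            shared = set()   # ids of unmarked objects reachable from several roots

            def walk(obj, root):
                for child in vars(obj).values():
                    if not hasattr(child, '__dict__'):
                        continue                      # skip plain values
                    if hasattr(child, 'storage_entry'):
                        continue                      # marked: gets its own entry
                    if id(child) in owner and owner[id(child)] is not root:
                        shared.add(id(child))         # would be duplicated today
                        continue
                    if id(child) not in owner:
                        owner[id(child)] = root
                        walk(child, root)

            for root in marked_roots:
                walk(root, root)
            return owner, shared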

    Read the article

  • Am I deluding myself? Business analyst transition to programmer

    - by Ryan
    Current job: Working as the lead business analyst for a Big 4 firm, leading a team of developers and testers working on a large scale re-platforming project (4 onshore devs, 4 offshore devs, several onshore/offshore testers). Also work in a similar capacity on other smaller scale projects. Extent of my role: Gathering/writing out requirements, creating functional specifications, designing the UI (basically mapping out all front-end aspects of the system), working closely with devs to communicate/clarify requirements and come up with solutions when we hit roadblocks, writing test cases (and doing much of the testing), working with senior management and key stakeholders, managing beta testers, creating user guides and leading training sessions, providing key technical support. I also write quite a few macros in Excel using VBA (several of my macros are now used across the entire firm, so there are maybe around 1000 people using them) and use SQL on a daily basis, both on the SQL compact files the program relies on, our SQL Server data and any Access databases I create. The developers feel that I am quite good in this role because I understand a lot about programming, inherent system limitations, structure of the databases, etc., so it's easier for me to communicate ideas and come up with suggestions when we face problems. What really interests me is developing software. I do a fair amount of programming in VBA and have been wanting to learn C# for a while (the dev team uses C# - I review code occasionally for my own sake but have not had any practical experience using it). I'm interested in not just the business process but also the technical side of things, so the traditional BA role doesn't really whet my appetite for the kind of stuff I want to do. Right now I have a few small projects that managers have given me and I'm finding new ways to do them (like building custom Access applications), so there's a bit here and there to keep me interested. My question is this: what I would like to do is create custom Excel or Access applications for small businesses as a freelance business (working as a one-man shop; maybe having an occasional contractor depending on a project's complexity). This would obviously start out as a part-time venture while I have a day job, but eventually become a full-time job. Am I deluding myself in thinking I can go from BA/part-time VBA programmer to making a full-time go of a freelance business (where I would be starting out just writing custom Excel/Access apps in VBA)? Or is this type of thing not usually attempted until someone gains years of full-time programming experience? And is there even a market for these types of applications amongst small (and maybe medium-sized) businesses?

    Read the article

  • Link instead of Attaching

    - by Daniel Moth
    With email storage not being an issue in many companies (I think I currently have 25GB of storage on my email account; I don’t even think about storage), this encourages bad behaviors such as liberally attaching office documents to emails instead of sharing a link to the document in SharePoint or SkyDrive or some file share etc. Attaching a file admittedly has its usage scenarios too, but it should not be the default. I thought I'd list the reasons why sharing a link can be better than attaching files directly. In no particular order:
    Better review. It allows multiple recipients to review the file and their comments are aggregated into a single document. The alternative is everyone having to detach the document, add their comments, then send it back to you, and then you have to collate. With the alternative, you also potentially miss out on recipients reading comments from other recipients.
    Always up to date. The attachment becomes a fork instead of an always up to date document. For example, you send the email on Thursday, I only open it on Tuesday: between those days you could have made updates that now I am missing because you decided to share a link instead of an attachment.
    Better bookmarking. When I need to find that document you shared, you are forcing me to search through my email (I may not even be running Outlook), instead of opening the link which I have bookmarked in my browser or my collection of links in my OneNote or from the recent/pinned links of the office app on my task bar, etc.
    Can control access. If someone accidentally or naively forwards your link to someone outside your group/org who you’d prefer not to have access to it, the location of the document can be protected with specific access control.
    Can add more recipients. If someone adds people to the email thread in Outlook, your attachment doesn't get re-attached - instead, the person added is left without the attachment unless someone remembers to re-attach it. If it was a link, they are immediately caught up without further actions.
    Enable discovery. If you put it on a share, I may be able to discover other cool stuff that lives alongside that document.
    Save on storage. So this doesn't apply to me given my opening statement, but if in your company you do have such limitations, attaching files eats up storage on all recipients' accounts and will also get "lost" when those people archive email (and be lost completely at some point if they follow the company retention policy).
    Like I said, attachments do have their place, but they should be an explicit choice for explicit reasons rather than the default. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • OTN Virtual Technology Summit - July 9 - Middleware Track

    - by OTN ArchBeat
    The Architecture of Analytics: Big Time Big Data and Business Intelligence This four-session track, part of the free OTN Virtual Technology Summit on July 9, will present a solution architect's perspective on how business intelligence products in Oracle's Fusion Middleware family and beyond fit into an effective big data architecture, offering insight and expertise from Oracle ACE Directors and product team experts specializing in business Intelligence to help you meet your big data business intelligence challenges. Register now! Sessions Oracle Big Data Appliance Case Study: Using Big Data to Analyze Cancer-Genome Relationships Tom Plunkett, Lead Author of the Oracle Big Data Handbook What does it take to build an award winning Big Data solution? This presentation takes a deep technical dive into the use of the Oracle Big Data Appliance in a project for the National Cancer Institute's Frederick National Laboratory for Cancer Research. The Frederick National Laboratory and the Oracle team won several awards for analyzing relationships between genomes and cancer subtypes with big data, including the 2012 Government Big Data Solutions Award, the 2013 Excellence.Gov Finalist for Innovation, and the 2013 ComputerWorld Honors Laureate for Innovation. [30 mins] Getting Value from Big Data Variety Richard Tomlinson, Director, Product Management, Oracle Big data variety implies big data complexity. Performing analytics on diverse data typically involves mashing up structured, semi-structured and unstructured content. So how can we do this effectively to get real value? How do we relate diverse content so we can start to analyze it? This session looks at how we approach this tricky problem using Endeca Information Discovery. [30 mins] How To Leverage Your Investment In Oracle Business Intelligence Enterprise Edition Within a Big Data Architecture Oracle ACE Director Kevin McGinley More and more organizations are realizing the value Big Data technologies contribute to the return on investment in Analytics. But as an increasing variety of data types reside in different data stores, organizations are finding that a unified Analytics layer can help bridge the divide in modern data architectures. This session will examine how you can enable Oracle Business Intelligence Enterprise Edition (OBIEE) to play a role in a unified Analytics layer and the benefits and use cases for doing so. [30 mins] Oracle Data Integrator 12c As Your Big Data Data Integration Hub Oracle ACE Director Mark Rittman Oracle Data Integrator 12c (ODI12c), as well as being able to integrate and transform data from application and database data sources, also has the ability to load, transform and orchestrate data loads to and from Big Data sources. In this session, we'll look at ODI12c's ability to load data from Hadoop, Hive, NoSQL and file sources, transform that data using Hive and MapReduce processing across the Hadoop cluster, and then bulk-load that data into an Oracle Data Warehouse using Oracle Big Data Connectors. We will also look at how ODI12c enables ETL-offloading to a Hadoop cluster, with some tips and techniques on real-time capture into a Hadoop data reservoir and techniques and limitations when performing ETL on big data sources. [90 mins] Register now!

    Read the article

  • Will not supporting IE or older browsers drive away potential visitors/users of my site? [closed]

    - by XToro
    Normally an SO browser, but this question doesn't fit there; hopefully it fits here. I just want to ask, from web designers' point of view, if it's wrong to not care about supporting Internet Explorer or older browsers. The site I'm designing looks great in all browsers except IE9-. There are certain things that IE doesn't support, or doesn't handle the way other browsers do: webkit stuff, some CSS styles, drag-and-drop files from the OS, etc., but it all works great in Safari, FireFox, Chrome etc. Should I be that concerned? I know there are several people that use IE, but its limitations have just been causing me more work by having to come up with workarounds. From what I've read, many of the issues I've been having should be solved with IE10, but not everybody keeps up to date. I know of several people who are still using IE6! Again, I'm hoping this is the right place to ask a question like this, and if not, please point me to the right stack exchange site instead of just downvoting me. Thanks! EDIT: Upon further research.... So far this year, IE (all versions) and Chrome have been neck and neck at the top, with IE only squeaking by Chrome, and FireFox a close 3rd. But looking at the top 10 browsers, IE6 doesn't even show up on this list, in which the lowest percentage is 1.92%. Source : http://www.w3counter.com/globalstats.php?year=2012&month=7 Having a look at this other site, IE6 shows up in 11th place out of 12, just before "Other": http://www.sitepoint.com/browser-trends-february-2012/ This makes me a little more wary of not spending more time on IE compatibility. However, my site will not be going to a live beta until October or November, and I'm hoping that IE10 will have more features coded into it. Currently, I've written my upload page, which is a "drag-and-drop files from the OS" type, to simply display "IE is not supported", leaving no other option for IE users to upload pictures, because I've spent so much time writing the uploader, which does many things other than just upload the files. I will be changing this kinda cold "Access Denied" to a suggestion to upgrade, or install other browsers, with download links for each. Big thanks for the posts here and the interesting links!

    Read the article

  • Best language on Linux to replace manual tasks that use SSH/Telnet? [on hold]

    - by Calab
    I've been tasked to create and maintain a web-browser-based interface to replace several of the manual tasks that we perform now. I currently have a "shaky" but working program written in Perl (2779 lines) that uses basic Expect coding, but it has some limitations that require a great deal of coding to get around. Because of this I am going to do a complete rewrite and want to do it "right" this time. My question is this... What would be the best language to use to create a web-based interface to perform SSH/Telnet tasks that we would normally do manually? Keep in mind the following requirements:
    Runs on a CentOS Linux system v5.10.
    HTTP will be served by Apache2.
    This is an INTRANET site and only accessible within our organization. User load will be light. No more than 5 users accessing it at one time.
    perl 5.8.8, php 5.3.3, python 2.7.2 are available... Not sure what other languages to check for, or what modules might be installed in each language.
    The web interface will need to provide progress indicators and text output produced by the remote connection, in real time as it is generated.
    If we are running our process on multiple hosts, they should be in individual threads so that they can run side by side, not sequentially.
    I want the ability to "trap" on specific text generated by the remote host and display an alert to the user - such as when the remote host generates an error message.
    I would like to avoid as much client-side scripting (javascript/vbscript) as I can. Most users will be on Windows PCs using Chrome or IE as a browser.
    Users will be downloading the resulting output so they can process it as they see fit.
    I currently have no experience with "Ajax" or the like. Most of my coding experience is old 6809 assembly, Visual Basic 6, and whatever I can cut/paste from online examples in various languages (hence my "shaky" Perl program). My coding environment is Eclipse for remote code editing, but I prefer stuff like UltraEdit if I can get a decent syntax file for the language I'm using. I do have su access on the server, but I'm not the only one using this server, so I can't just upgrade/install blindly as I might impact other software currently running on the machine. One reason that I'm asking here, instead of searching (which I did), is that most replies were, "use language 'xyz', but you need to use an external SSH connection" - like I'm using Expect in my Perl script. Most also did not agree on what language that 'xyz' should be. ...so, after this long posting, can someone offer some advice?
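
    To make the threading/streaming requirement concrete, here is a rough Python sketch (Python being one of the listed options; Paramiko as the SSH layer is an assumption, it is not named in the question). One thread per host, output streamed line by line as it arrives, with a simple text "trap". Hosts, credentials and the command are placeholders:

        import threading
        import paramiko

        def run_on_host(host, command, report):
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect(host, username='ops', password='secret')   # placeholders
            stdin, stdout, stderr = client.exec_command(command)
            for line in stdout:                     # stream output as it is produced
                report(host, line.rstrip())
                if 'ERROR' in line:                 # "trap" on text from the remote host
                    report(host, '*** alert: remote error detected ***')
            client.close()

        def report(host, text):
            # In the real app this would push to the browser (e.g. via long-polling,
            # WebSockets or Server-Sent Events); here it just prints.
            print('[%s] %s' % (host, text))

        threads = [threading.Thread(target=run_on_host, args=(h, 'uptime', report))
                   for h in ['host1.example.com', 'host2.example.com']]
        for t in threads:
            t.start()
        for t in threads:
            t.join()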

    Read the article

  • [AJAX Numeric Updown Control] Microsoft JScript runtime error: The number of fractional digits is out of range

    - by Jenson
    If you have been using the AJAX Control Toolkit a lot (I will skip the parts on where to download it and how to configure it in Visual Studio 2010), you might have encountered some bugs or limitations of the controls, or rather, some weird behaviours. I would just call them weird behaviours, though. Recently, I've been working on an AJAX NumericUpDown control, which I remember clearly was working fine without problems. In fact, I use 2 NumericUpDown controls this time. So I went on to configure them to be as simple as possible, and I will just use the default up and down buttons provided (so that I won't need to design my own). I have two textbox controls to display the values controlled by the updown controls. One for month, and another for year.

        <asp:TextBox ID="txtMonth" runat="server" CssClass="txtNumeric" ReadOnly="True" Width="150px" />
        <asp:TextBox ID="txtYear" runat="server" CssClass="txtNumeric" ReadOnly="True" Width="150px" />

    So I will now drop 1 NumericUpDownExtender for each of the textboxes.

        <asp:NumericUpDownExtender ID="txtMonth_NumericUpDownExtender"
            runat="server" TargetControlID="txtMonth" Maximum="12" Minimum="1" Width="152">
        </asp:NumericUpDownExtender>
        <asp:NumericUpDownExtender ID="txtYear_NumericUpDownExtender"
            runat="server" TargetControlID="txtYear" Width="152">
        </asp:NumericUpDownExtender>

    You noticed that I configure the Maximum and Minimum values for the first NumericUpDown control, but I never did the same for the second one (for txtYear). That's because it won't work, well, at least for me. So I removed the Minimum="2000" and Maximum="2099" from there. Then I configure the initial value to the current year, and let the year flow up and down freely. If you want, you can write code to restrict it. Here is the code I use in Page_Load:

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            If Not Page.IsPostBack Then
                If Trim(txtMonth.Text) = "" Then
                    Me.txtMonth.Text = System.DateTime.Today.Month
                End If
                If Trim(txtYear.Text) = "" Then
                    Me.txtYear.Text = System.DateTime.Today.Year
                End If
            End If
        End Sub

    Enjoy!

    Read the article

  • Development Approach: User Interface In or Domain Model Out?

    - by Berin Loritsch
    While I've never delivered anything using Smalltalk, my brief time playing with it has definitely left its mark. The only way to describe the experience is MVC the way it was meant to be. Essentially, all the heavy lifting for your application is done in the business objects (or domain model if you are so inclined). The standard controls are bound to the business objects in some way. For example, a text box is mapped to an object's field (the field itself is an object, so it's easy to do). A button would be mapped to a method. This is all done with a very simple and natural API. We don't have to think about binding objects, etc. It just works. Yet, in many newer languages and APIs you are forced to think from the outside in. First with C++ and MFC, and now with C# and WPF, Microsoft has gotten its developer world hooked on GUI builders where you build your application by implementing event handlers. Java Swing development isn't so different, only you are writing the code to instantiate the controls on the form yourself. For some projects, there may never even be a domain model--just event handlers. I've been in and around this model for most of my career. Each way forces you to think differently. With the Smalltalk approach, your domain is smart while your GUI is dumb. With the default VisualStudio approach, your GUI is smart while your domain model (if it exists) is rather anemic. Many developers that I work with see value in the Smalltalk approach, and try to shoehorn that approach into the VisualStudio environment. WPF has some dynamic binding features that make it possible; but there are limitations. Inevitably some code that belongs in the domain model ends up in the GUI classes. So, which way do you design/develop your code? Why? GUI first. User interaction is paramount. Domain first. I need to make sure the system is correct before we put a UI on it. There are pros and cons to either approach. Domain model fits in there with crystal cathedrals and pie in the sky. GUI fits in there with quick and dirty (sometimes really dirty). And for an added bonus: How do you make sure the code is maintainable?
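
    A toy illustration of the "smart domain, dumb GUI" idea (sketched in Python rather than Smalltalk or C#, and not tied to any particular toolkit): the widget only forwards input to the model and renders from it, while the rules live in the domain object. All names are made up:

        class Invoice:
            def __init__(self):
                self.quantity = 0
                self.unit_price = 10.0

            @property
            def total(self):
                return self.quantity * self.unit_price

            def set_quantity(self, value):
                if value < 0:
                    raise ValueError('quantity cannot be negative')  # domain rule
                self.quantity = value

        class QuantityBox:
            """A 'dumb' text box bound to Invoice.quantity."""
            def __init__(self, model):
                self.model = model

            def on_text_changed(self, text):
                self.model.set_quantity(int(text))   # no business logic here

            def render(self):
                return str(self.model.quantity)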

    Read the article

  • HTTP Error 503 - Service is unavailable (how to fix?)

    - by SilverLight
    I have a web site for downloading mobile files and there are many users on my web site. Sometimes I get the error below: HTTP Error 503 - Service is unavailable. 1. So why does this error happen and what does it mean? 2. As I understand it, Apache frees itself up when it's overloaded, but what about IIS? How can I put some limitations on my server (I have remote access to my server) to prevent this error from happening? a. Is limiting download speed effective for preventing that error from occurring? How can I do that? Is Squid useful for this job, or can I do it with another IIS extension? b. Is limiting download bandwidth effective for preventing that error from occurring? How can I do that (with IIS or another extension)? On the right side of IIS - the configure area - I found some limits. What do those limits mean and can I use them to keep my server alive all the time? EDIT: After viewing the Windows event viewer - custom views - server roles - web server (IIS), I figured out there are no errors in that area, but many warnings and informational messages. The latest warnings and informational messages are like the ones below: warning: A worker process '2408' serving application pool 'ASP.NET 4.0 (Integrated)' failed to stop a listener channel for protocol 'http' in the allotted time. The data field contains the error number. warning: A process serving application pool 'ASP.NET 4.0 (Integrated)' exceeded time limits during shut down. The process id was '6764'. warning: A worker process '3232' serving application pool 'ASP.NET 4.0 (Integrated)' failed to stop a listener channel for protocol 'http' in the allotted time. The data field contains the error number. warning: A process serving application pool 'ASP.NET 4.0 (Integrated)' exceeded time limits during shut down. The process id was '3928'. Thanks in advance. Best regards

    Read the article

  • ADSL throughput loss from Reed-Solomon encoding

    - by javano
    I'm reading about ADSL starting here and I am confused by how the Reed-Solomon encoding for ECC limits the available transfer rate as much as it does (nearly half). This pdf on the same subject contains the following: A maximum of 255 sub-carriers can be used to modulate data in the downstream direction. Sub-carrier 256, the downstream Nyquist frequency, and sub-carrier 64, the downstream pilot frequency, are not available for user data, thus limiting the total number of available downstream sub-carriers to 254. Each of these 254 sub-carriers can support the modulation of 0 to 15 bits. Since the ADSL DMT data frame rate is 4000 frames per second, the maximum theoretical downstream data rate of an ADSL system is 15.24Mbps. Due to limitations in system architecture, specifically the maximum allowable Reed-Solomon codeword size (255 bytes), the maximum achievable downstream data rate is 8.16Mbps. How is this nearly halving the throughput? Is all that extra bandwidth overhead of the RS encoding? 15240000 bps (15.24Mbps) - 8160000 bps (8.16Mbps) = 7080000 bps (7.08Mbps). Where has that 7.08Mbps of throughput gone? EDIT: I tried to read the wiki page on Reed-Solomon but it's all crazy maths and algebra, which I don't understand. I can understand that data is split into 255-byte codewords, because that may be the max codeword size whilst still maintaining accuracy during transmission; but I don't understand why that means less data is sent?
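
    The arithmetic behind the two quoted figures can be reproduced directly; the second calculation rests on an assumption implied by the PDF, namely that ADSL1 builds one Reed-Solomon codeword of at most 255 bytes per DMT frame, so the frame payload is capped at 255 bytes no matter how many bits the 254 sub-carriers could in theory modulate:

        subcarriers, bits_per_carrier, frames_per_sec = 254, 15, 4000
        codeword_bytes = 255   # max RS codeword size, including parity overhead

        theoretical = subcarriers * bits_per_carrier * frames_per_sec
        print(theoretical)     # 15240000 -> 15.24 Mbps

        rs_limited = codeword_bytes * 8 * frames_per_sec
        print(rs_limited)      # 8160000  -> 8.16 Mbps

    On that reading, the missing 7.08 Mbps is not so much parity added by Reed-Solomon as sub-carrier capacity that a single 255-byte-per-frame codeword can never fill.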

    Read the article

  • Strange performance issue with Dell R7610 and LSI 2208 RAID controller

    - by GregC
    Connecting the controller to any of the three PCIe x16 slots yields choppy read performance around 750 MB/sec. The lowly PCIe x4 slot yields a steady 1.2 GB/sec read. Given the same files, same Windows Server 2008 R2 OS, same RAID6 24-disk Seagate ES.2 3TB array on an LSI 9286-8e, same Dell R7610 Precision Workstation with A03 BIOS, same W5000 graphics card (no other cards), same settings etc. I see super-low CPU utilization in both cases. SiSoft Sandra reports x8 at 5GT/sec in the x16 slot, and x4 at 5GT/sec in the x4 slot, as expected. I'd like to be able to rely on the sheer speed of the x16 slots. What gives? What can I try? Any ideas? Please assist. Cross-posted from http://en.community.dell.com/support-forums/desktop/f/3514/t/19526990.aspx Follow-up information: We did some more performance testing, reading from 8 SSDs connected directly (without an expander chip). This means that both SAS cables were utilized. We saw nearly double the performance, but it varied from run to run: {2.0, 1.8, 1.6, and 1.4 GB/sec were observed, then performance jumped back up to 2.0}. The SSD RAID0 tests were conducted in a x16 PCIe slot, all other variables kept the same. It seems to me that we were getting double the performance of the HDD-based RAID6 array. Just for reference: the maximum possible read burst speed over a single channel of SAS 6Gb/sec is 570 MB/sec due to 8b/10b encoding and protocol limitations (a SAS cable provides four such channels).
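
    A quick sanity check of the reference figure at the end (rounded numbers; the quoted 570 MB/sec also subtracts SAS protocol overhead, which is not modelled here):

        lane_gbps = 6.0             # SAS 2.0 line rate per channel
        payload_ratio = 8.0 / 10.0  # 8b/10b encoding
        lanes_per_cable = 4

        per_lane = lane_gbps * 1e9 * payload_ratio / 8 / 1e6
        print(per_lane)                    # 600.0 MB/sec before protocol overhead
        print(per_lane * lanes_per_cable)  # 2400.0 MB/sec ceiling per x4 SAS cable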

    Read the article

  • Using dnsmasq for accessing multiple nameservers assigned by DHCP

    - by Ash
    At my work desktop running openSUSE 11.4, I have a local network which gets its address, domain (work.site) and nameserver (10.100.1.1, 10.100.1.2) info through DHCP - which gets written into /etc/resolv.conf. I get to access the internet using the work network, and these 2 nameservers end up returning the entries for any public domain name lookups on the internet. I also have a private VPN that I end up connecting to. The nameserver (10.111.1.1) and domain (private.site) for this network hardly ever change, but currently they're pushed by the openVPN client into NetworkManager, and they also get merged with the existing /etc/resolv.conf. My resolv.conf ultimately ends up looking like this:

        search private.site work.site
        nameserver 127.0.0.1
        nameserver 10.111.1.1
        nameserver 10.100.1.1

    As you can see, the 2nd nameserver from my work network was pushed out because of the max-3-entry limitation. It is fine still, but would be a problem if that nameserver goes down for maintenance or something. So I found out that dnsmasq could help me here, and hence I set up dnsmasq just as a local DNS resolver without any DHCP support. So right now this is my /etc/dnsmasq.conf:

        resolv-file=/etc/resolv.conf
        server=/private.site/10.111.1.1
        server=/1.111.10.in-addr.arpa/10.111.1.1
        listen-address=127.0.0.1
        bind-interfaces
        log-queries

    I've made dnsmasq get the list of nameservers from /etc/resolv.conf since NetworkManager seems to be updating this list correctly (for a max of 3 nameservers). I'm able to resolve the host names in both networks correctly. So these are the questions I have: Is there a way I can make either NetworkManager or dhclient write out the list of nameservers somewhere else, which I can then make dnsmasq use as its resolv-file? How do I make dnsmasq use certain nameservers as the default for all queries? Right now I notice that lookups for public domains on the internet are usually sent to both nameservers - the one on work.site as well as private.site. It would be good if I can limit this to work.site only.

    Read the article

  • Hosting a javascript api file for third party sites the way sharethis, uservoice, analytics do it.

    - by Dayson
    I'm preparing to launch a service soon which will provide third party websites a widget. The widget requires my javascript file in the website's code. Exactly the same way services like analytics, uservoice, sharethis, getclicky, etc. provide you with a javascript snippet to add to your page. Therefore, my javascript file is going to be hotlinked by tons of websites, which possibly receive a lot of requests too. I need advice/opinions on the following aspects: What's the right location for hosting this file? Should I use a sub-domain for it? I was thinking of something like http://api.myservice.com/js/foo.js . Remember, once websites start embedding this file, its location CANNOT change under any circumstances. Right now we can afford just one dedicated server. So I have minified my file, enabled gzip and plan to use some good cache-control headers through Apache. Also, in the near future when the requests pick up, I will use an HTTP proxy like Varnish. Is this a good plan for the near future? Should I be considering a CDN in the future (since we can't afford it now)? If so, how do I make sure we're prepared to migrate to it without breaking services? Pros/cons of moving just this file to a CDN? Also, since it's just one javascript file (50kb), is there any affordable CDN so we could consider it from the beginning? Any other word of advice I could use? Anything I shouldn't overlook at this stage which I would regret later? (both in terms of server + javascript ajax limitations) Thanks in advance.

    Read the article

  • Solaris TCP stack tuning

    - by disserman
    We have a large web project (about 2-3k requests per second), using haproxy (http://haproxy.1wt.eu/) as a frontend and load balancer between the java application servers. The frontend (haproxy) is running on Linux but we are going to migrate it to Solaris 10, as all our other servers are running under Solaris. After switching the traffic I see two things: a) the web site started loading slower (5-10 seconds with images in comparison to 2-3 seconds on Linux); b) sometimes haproxy fails to perform a "lifecheck" (get a special web page and analyze the http response code) due to a socket timeout. After switching traffic back to Linux everything is okay. I've tried to tune all params I found in /dev/tcp but no progress. I believe the problem is in some open socket limitations. If someone can point me to the answer, I would greatly appreciate it. p.s. haproxy is running under Xen DomU on Linux (Kernel 2.6.18, Debian 5), and under a zone on Solaris (10 u8). The only thing we did on Linux was increase ip_conntrack_max (I believe the Solaris option tcp_conn_req_max_q is the equivalent).

    Read the article

  • Setting up Squid -> VPN connection

    - by Nedlinin
    I recently purchased a VPS and am wanting to use it as a VPN server. However, it has bandwidth limitations. So, I figured since I already have a local Squid proxy caching things for me, I could have users connect to the proxy and the proxy connect to the VPN. Then when someone hits the web, Squid will serve it from cache if available and, if not, it will use the VPN to download it. My issue is, I have no idea how to set this up :p - Essentially I want Machine - Squid - VPN. My VPN is running on Ubuntu Server with pptpd. Squid is running on a local Arch Linux box. Squid and the VPN are both working perfectly independently. Any help on how to have Squid push traffic through the VPN would be greatly appreciated! Also: I don't actually want to use the VPN for all traffic. Otherwise, I'd just connect my router to the VPN and be happy. I only want to use it for web traffic from specific machines on the network.

    Read the article

  • Options for synchronizing Palm Desktop calendars

    - by Al Everett
    My wife and I each have Palm Centro phones and very full calendars. We've been using Palm products for years and are happy with them and are not looking to switch (and don't have the budget for it even if we wanted to). What is a viable way we can synchronize our calendars? Back when we had Palm Z22s we used AirSet. It worked great in synchronizing our desktop calendars. Unfortunately, their sync software does not support Palm Desktop v6 and there is nothing in the pipeline to support it. (The third party vendor is apparently not interested in updating for the newer Palm desktop.) I would love to be able to get back to having each other's appointments appear on the other's calendar. What can we do? Some limitations: A data plan is not an option (so this precludes over-the-air synchronization options) No Microsoft Outlook Edit: Syncing my Palm to my Palm Desktop is not the issue. Being able to sync, in some way, the Palm calendar databases of my wife's and my calendars is what's desired.

    Read the article

  • Officially announced RAM support size doesn't apply to one of twin rigs with just one difference

    - by Deniz
    It'll take a little while to describe my situation, but here goes the story: In January 2009 we bought (the OEM parts) two similar systems with just one difference. One of them had a Phenom X4 CPU and the other one (mine) a Phenom X3 CPU. At the beginning we had problems powering both systems on while all of their RAM slots were full. We decided to install the systems with just 2 slots populated and later try to install the rest of the RAM sticks. Both systems did end up supporting 3 sticks. We tried many different procedures to make the systems work with their fourth RAM slots populated. We waited for new BIOS updates and flashed the boards when they were available, we tried different RAM sticks with different frequencies etc. One day while we were trying to install the fourth stick, the X4 machine did accept it. The other one did not. The most mind-boggling thing was that after one of my trials the X3 system began refusing to operate with the third slot populated. Our boards did have AMD 770 chipsets and we even tried to change the board of the X3 machine with another 770 chipset board. Now my questions are: Should we change the CPU? What is causing the X3 system to not accept the fourth (or now the third) RAM stick? The manufacturers' sites do claim that these boards accept 4 RAM sticks (but they only tested them with certain RAM brands and models). What are the limitations for maximum RAM configurations on motherboards? Are there some "rules of thumb" other than the frequency, voltage and chip type considerations, which we did check for our parts? Our boards are: Gigabyte GA-MA770-DS3 and Sapphire PC-AM2RX780 - PURE CrossFireX 770

    Read the article

  • Is Windows Server 2008R2 NAP solution for NAC (endpoint security) valuable enough to be worth the hassles?

    - by Warren P
    I'm learning about Windows Server 2008 R2's NAP features. I understand what network access control (NAC) is and what role NAP plays in that, but I would like to know what limitations and problems it has that people wish they knew about before they rolled it out. Secondly, I'd like to know if anyone has had success rolling it out in a mid-size (multi-city corporate network with around 15 servers, 200 desktops) environment with most (99%) Windows XP SP3 and newer Windows clients (Vista, and Win7). Did it work with your anti-virus? (I'm guessing NAP works well with the big-name anti-virus products, but we're using Trend Micro.) Let's assume that the servers are all Windows Server 2008 R2. Our VPNs are Cisco stuff, and have their own NAC features. Has NAP actually benefitted your organization, and was it wise to roll it out, or is it yet another in the long list of things that Windows Server 2008 R2 does, but that if you do move your servers up to it, you're probably not going to want to use? In what particular ways might the built-in NAP solution be the best one, and in what particular ways might no solution at all (the status quo pre-NAP) or a third-party endpoint security or NAC solution be considered a better fit? I found an article where a panel of security experts in 2007 said NAC is maybe "not worth it". Are things better now in 2010 with Win Server 2008 R2?

    Read the article

  • Are relative-path symlinks reliable on Rackspace Cloud Sites?

    - by Jakobud
    Rackspace's Cloud Sites have a lot of stupid limitations. For example, no SSH (in or out), no shell, no RSYNC, etc... (even through cron). Recently I learned that you can't reliably use symlinks in Cloud Sites. Apparently this is because the absolute path of your sites could change at any moment, since it's a shared host environment split up between many disks/servers. I guess different accounts' sites get moved from disk to disk whenever Rackspace decides to, supposedly to increase efficiency across the board. So after talking with a Rackspace tech, he said they cannot guarantee that symlinks would always work. Obviously this is because if you have a symlink that uses an absolute path like this:

        //mnt/disk-34566/home/user34566/files/sites/www.mysite.com/mydir

    and your files get moved to a different disk (or whatever they do), then the absolute path would be different and the link would now be broken. That makes sense. So next, I asked the Rackspace tech if relative-path symlinks were reliable. So if I have the following link:

        files/sites/www.mysite.com/mylink --> ../www.myothersite.com/anotherdir

    You can see that the symlink simply points to a nearby directory's sub-directory. He said they cannot guarantee that even those would always work either. Since it uses a relative path to another nearby directory, I'm not sure how it could ever break from something Rackspace would do. Do relative symlinks somehow rely on absolute paths underneath? Or is Rackspace using some weird custom filesystem where they will break from absolute path changes? It seems like a relative-path symlink would be fine and would only break if the user did something to mess up the directories involved. But when the techs say that they "don't officially support symlinks of any kind", that makes me hesitant to use them for large commercial websites in Cloud Sites. Can anyone with Rackspace experience give input on this topic?
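
    A small self-contained demonstration (made-up paths; run it in a scratch directory) of why a relative symlink should not care what the absolute mount path is: only the relative target string is stored on disk, and it is resolved against wherever the link's parent directory happens to live at the time:

        import os

        os.makedirs('sites/www.myothersite.com/anotherdir', exist_ok=True)
        os.makedirs('sites/www.mysite.com', exist_ok=True)

        os.symlink('../www.myothersite.com/anotherdir', 'sites/www.mysite.com/mylink')

        print(os.readlink('sites/www.mysite.com/mylink'))
        # ../www.myothersite.com/anotherdir  <- the only thing stored in the link
        print(os.path.realpath('sites/www.mysite.com/mylink'))
        # resolves relative to wherever 'sites/www.mysite.com' lives right now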

    Read the article

  • What determines what resolutions a laptop is willing to output over VGA?

    - by Joshua McKinnon
    I'm responsible for several conference rooms and have set up 1080p projectors, and I provide both HDMI and VGA connectivity. HDMI for DisplayPort and Mini-DisplayPort, and VGA as a fallback, universal option. Contrary to what I expected, people seem to have much more trouble with the HDMI than VGA, so VGA gets used a lot more than you'd think (even as most workstation laptops made in the last 3-4 years have DisplayPort or Mini-DisplayPort...). Also to my surprise, VGA carries 1080p over a 50ft cable run with very minimal degradation on certain laptops - other laptops just don't offer 1080p as a resolution choice and top out at 1600x1200 or something else. Specific example: a ThinkPad W530 will do 1080p, a W520 won't, over VGA. (Both do 1080p over DisplayPort/mini-DP.) What determines what resolutions a laptop is willing to output over VGA? I'm thinking this will come down to either a video driver that says it supports only certain resolutions for output, or limitations of the RAMDAC (which wouldn't be in play, at least DAC-wise, on a digital output, but WOULD on VGA, an analog output). The basic reason for the question is that I noticed, say, a ThinkPad W520 with a 1080p built-in display will output 1080p fine over DisplayPort to a 1080p projector, but will cap out at 1600x1200 (practically the same pixel count, just a little shy) on VGA. Now, this wouldn't be surprising at all except SOME laptops have no issue outputting 1080p over VGA, even with lower native resolutions. Why do I care? Well, if there's some way I could enable it... for situations where my users end up using VGA anyway, it's preferable for display mirroring if they can output their laptop's native resolution, which, you guessed it, is very often 1080p on 15" models. DISCLAIMER: This is primarily a curiosity. I'm not claiming 1080p over VGA is ideal by any means, but hey, if it works. I've seen HDMI start artifacting more over same-length, same-gauge cabling (up to a 50' run in certain rooms). If you think this is better suited to SuperUser, please move it, but this is framed from an IT standpoint of something that affects a real pool of users in a multiple conference room, 50+ deployed laptop scenario.

    Read the article
