Search Results

Search found 4045 results on 162 pages for 'automatic maintenance'.

Page 111 of 162

  • Code-behind methods vs. jQuery AJAX calls

    - by punkouter
    There's a war brewing, I can feel it! Old-school coders are used to having every server control raise events in the .cs files: for example, loading the initial data, saving data, deleting data, and then binding data sources to the server control. New-school coders want to do it with jQuery + AJAX calls to .svc files. That gives you no postbacks automatically, so that is an advantage, and I think it's a different way of thinking: all of a sudden, the UI-related events are all handled in jQuery. What is the most modern and efficient way to go? How can I convince the old-school coders to let us use this new paradigm (assuming it is the better way)?
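
    To make the comparison concrete, here is a minimal sketch (names and attributes are illustrative, not a definitive implementation) of the kind of WCF service a .svc endpoint might expose; the jQuery side would then hit it with $.ajax({ url: 'CustomerService.svc/GetCustomers', dataType: 'json' }) instead of a postback:

      using System.ServiceModel;
      using System.ServiceModel.Activation;
      using System.ServiceModel.Web;

      // Hypothetical service behind CustomerService.svc.
      [ServiceContract(Namespace = "")]
      [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
      public class CustomerService
      {
          [OperationContract]
          [WebGet(ResponseFormat = WebMessageFormat.Json)]
          public string[] GetCustomers()
          {
              // Data access is stubbed out for the sketch.
              return new[] { "Alice", "Bob" };
          }
      }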

    Read the article

  • How can I tell NetBeans to use the latest available version of a JAR for a library?

    - by Freiheit
    I have a NetBeans project with a library defined that includes several JARs. These JARs are versioned like lib\blah\com.blah.wibble.jar_0.6.1.201004161543. They are nightly builds from another project, so the version changes often. I know I can point NetBeans at the specific JARs with the version name, but this means that every time I get a new version I have to update the NetBeans library. I can also strip the version name from the JARs, but this makes it hard to track down bugs: "What version of the blah JARs?" is usually the second thing we ask when we find a bug. Is it possible to tell NetBeans to use the included com.blah.wibble.jar_[??????], where ?????? is some sort of automatic pointer to the latest available version?

    Read the article

  • Enabled Network Discovery on Server, and now VNC and Squeezebox clients don't work

    - by Mike Hanson
    I've recently set up a Windows Server 2008 box. It's running an email server, Squeezebox server, MS SQL Server, etc., and I'm doing remote maintenance with UltraVNC. I had everything working fine. Then the server needed to access a network share on another machine, and I was prompted to turn on network discovery, which I did, choosing the Home rather than the Public option. Since doing that, some things have stopped working while others are still fine: shared folders and the email services (ports 25 and 110) are still accessible, but VNC (port 5900) and the Squeezeboxes (port 9000) no longer work. Here's what I've tried so far:
    - Checked the network discovery settings to see if anything looked strange.
    - Checked the firewall settings; those ports appear to be open.
    - Also in the firewall settings, the entries for Private network discovery were all on, but the Domain/Public ones were off, so I tried turning those on.
    - In the services, turned on Function Discovery Resource Publication and SSDP Discovery.
    Any other suggestions?

    Read the article

  • C# two classes with static members referring to each other

    - by Jerry
    Hi, I wonder why this code doesn't end up in endless recursion. I guess it's connected to the automatic initialization of static members to default values, but can someone tell me, step by step, how 'a' gets the value of 2 and 'b' gets 1?

      public class A
      {
          // Referencing A.a triggers A's type initializer, which reads B.b.
          public static int a = B.b + 1;
      }

      public class B
      {
          // A's initializer is already in progress at this point, so A.a still
          // holds its default value (0); no re-entry, hence no endless recursion.
          public static int b = A.a + 1;
      }

      static void Main(string[] args)
      {
          Console.WriteLine("A.a={0}, B.b={1}", A.a, B.b); // A.a=2, B.b=1
          Console.Read();
      }

    Read the article

  • I have enabled the hidden administrator in Windows 7 Home, but programs still don't work

    - by Angela
    I have Windows 7 Home Premium, and would like to do some maintenance tasks such as running Disk Defragmenter. However, this and other programs and applications that I'm accustomed to using are now blocked. For these programs, there is a shield icon next to their icons and nothing happens when I click on them. I notice that the screen blinks slightly, but I do not get prompted for a password and the program still does not run. It seems these programs may only be accessible through an Administrator account. However, right-clicking and selecting "Run As Administrator" does not work. After some research, I found a way to enable the hidden built-in Administrator account. I booted the computer into safe mode. In the command prompt, I typed net user administrator /active:yes. I gave the account a password. I rebooted the system. There is now an Administrator account on the home screen. However, the locked programs behave no differently for me when I use this account. What could cause this problem? How can I fix it?

    Read the article

  • SQL Server architecture: they want to move my database to a new instance... Why?

    - by O'MALLEY
    Our current production database environment contains about 10 similarly managed databases. Our agency has just purchased and is installing new blade chassis, and wants to move my database to a new instance (leaving the other 9 on another). This decision is being driven by one of our IT staff, not a DBA. I am a project manager, not a DBA, but I know enough to not necessarily have a good feeling about this decision, and I am urging our IT department to make a sound choice based on what is best for the database. Our IT department has stated that it is not good to have all our eggs in one basket, and also that my database contains "regulatory data", so it should be on its own instance. A couple of truths:
    - None of the databases on the current instance are OLTP databases, nor are any of them data warehouses.
    - My database currently has joins/views to a couple of the other databases in the production environment.
    So my questions are as follows:
    - Am I wrong to disregard a statement about eggs in baskets? (Hello, this is why we have maintenance plans and disaster recovery plans.) I'll mention that other databases have regulatory data too.
    - What types of questions do I need to ask to determine whether this is a sound decision? (A DBA friend mentioned that if the service level agreement of said database does not radically differ from the others, then why do they want to do this?)
    - I have done some research on linked servers. What arguments should I bring forth about the fact that I have views set up that rely on data from other DBs currently?

    Read the article

  • How ZFS handles online replacement in a RAID-Z (theoretical)

    - by Kevin
    This is a somewhat theoretical question about ZFS and RAID-Z. I'll use a three-disk single-parity array as an example for clarity, but the problem can be extended to any number of disks and any parity. Suppose we have disks A, B, and C in the pool, and that it is clean. Suppose now that we physically add disk D with the intention of replacing disk C, and that disk C is still functioning correctly and is only being replaced out of preventive maintenance. Some admins might just yank C and install D, which is a little more organized as devices need not change IDs; however, this does leave the array temporarily degraded, so for this example suppose we install D without offlining or removing C. The Solaris docs indicate that we can replace a disk without first offlining it, using a command such as:

      zpool replace pool C D

    This should cause a resilvering onto D. Let us say that resilvering proceeds "downwards" along a "cursor" (I don't know the actual terminology used in the internal implementation). Suppose now that midway through the resilvering, disk A fails. In theory, this should be recoverable, as above the cursor B and D contain sufficient parity, and below the cursor B and C contain sufficient parity. However, whether or not this is actually recoverable depends upon internal design decisions in ZFS which I am not aware of (and which the manual doesn't state in certain terms). If ZFS continues to send writes to C below the cursor, then we are fine. If, however, ZFS internally treats C as though it were gone, resilvering D only from parity between A and B and only writing A and B below the cursor, then we're toast. Some experimenting could answer this question, but I was hoping maybe someone on here already knows which way ZFS handles this situation. Thank you in advance for any insight!

    Read the article

  • How do I fix "error 1004, 0, Unable to find property" in an Entity Framework 4 WinForms application?

    - by Ivan
    I've designed an EF4 model (quite complex inheritance, lots of small tables, including multiple self-referencing ones), generated a database (table-per-type), and inserted some basic data manually. It works fine in an ASP.NET Dynamic Data Entities web application with full automatic scaffolding. But when, in a WinForms application using the same model (I share it as part of a class library), I construct a query and bind a combo box to it (the way it's shown here), I get an InnerException: {"Internal .NET Framework Data Provider error 1004, 0, Unable to find property... I've found a question about the same problem here (including a sample to reproduce the error) but no answer. I'm using the final Visual Studio 2010, not a beta.
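
    For context, the binding that triggers the error is roughly of this shape (entity and control names below are placeholders, not the actual model); one hedged first step when chasing provider errors like this is to materialize the query with ToList() so the ComboBox binds to plain objects rather than to the query itself:

      using System.Linq;

      // Sketch only: "ModelContainer" and "Products" stand in for the real
      // EF4 context and entity set from the shared class library.
      var context = new ModelContainer();
      comboBox1.DisplayMember = "Name";
      comboBox1.ValueMember = "Id";
      comboBox1.DataSource = context.Products.ToList(); // materialize before binding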

    Read the article

  • Scaling databases with cheap SSD hard drives

    - by Dennis Kashkin
    Hey guys! I hope that many of you are working with high-traffic database-driven websites, and chances are that your main scalability issues are in the database. I noticed a couple of things lately:
    - Most large databases require a team of DBAs in order to scale. They constantly struggle with the limitations of hard drives and end up with very expensive solutions (SANs or large RAIDs, frequent maintenance windows for defragging and repartitioning, etc.). The actual annual cost of maintaining such databases is in the $100K-$1M range, which is too steep for me :)
    - Several companies like Intel, Samsung, FusionIO, etc. have just started selling extremely fast yet affordable SSD drives based on SLC flash technology. These drives are 100 times faster in random reads/writes than the best spinning hard drives on the market (up to 50,000 random writes per second). Their seek time is pretty much zero, so the cost of random I/O is the same as sequential I/O, which is awesome for databases. These SSD drives cost around $10-$20 per gigabyte, and they are relatively small (64GB).
    So there seems to be an opportunity to avoid the HUGE costs of scaling databases the traditional way by simply building a big enough RAID 5 array of SSD drives (which would cost only a few thousand dollars). Then we don't care if the database file is fragmented, and we can afford 100 times more disk writes per second without having to spread the database across 100 spindles. Is anybody else interested in this? I've been testing a few SSD drives and can share my results. If anybody on this site has already solved their I/O bottleneck with SSDs, I would love to hear your war stories!
    PS. I know that there are plenty of expensive solutions out there that help with scalability, for example the time-proven RAM-based SANs. I want to be clear that even $50K is too expensive for my project. I have to find a solution that costs no more than $10K and does not take much time to implement.

    Read the article

  • How do I give a jQuery element a fixed position on the page? In other words, absolute positioning of a jQuery element.

    - by Stephanie
    <script type="text/javascript">
    $(function() {
        $('a.StackedSystem').hover(function(e) {
            var html = '<div id="StackedSysteminfo">';
            html += '<div id="StackedSystemTxt">ETTER utilizes the latest technologies for our booster systems, including PLC-based controls complete with touch-screen panel user interfaces (HMI). The base package includes the gray-scale screen as shown; color screens are also available. The PLC not only provides a cleaner interface but provides additional features like automatic logging and time/date stamping of all alarms and shut-downs. Great for trouble-shooting.</div>';
            html += '</div>'; // close both divs before appending
            // The selector must match the id of the appended div.
            $('body').append(html).children('#StackedSysteminfo').hide().fadeIn(400);
        }, function() {
            $('#StackedSysteminfo').remove();
        });
    });
    </script>

    Read the article

  • Production deployment to EC2 with minimal downtime

    - by jensendarren
    I have a simple web application deployed on a large EC2 instance. I now want to deploy the latest code to this server, but I want to do it in a way that minimizes downtime and is as smooth as possible for the end user. Here is my plan:
    1. Fire up another large instance.
    2. Install all the software layers on that instance.
    3. Restore and attach an EBS drive to the instance.
    4. Deploy our latest production-ready code on the new instance.
    5. Run all tests (including manual testing of the application).
    6. (If tests pass) Put a "Site Under Maintenance" notice on the live site.
    7. Back up the EBS volume on the live site.
    8. Detach the EBS volume from the new server and replace it with the latest backup.
    9. Use ec2-associate-address to move the IP address to the new instance.
    10. Sit back and wait for traffic to start flowing through the new instance.
    11. Terminate the old instance.
    Does this seem like a good strategy? Are there any tutorials or books that might cover this topic? I have already read Cloud Application Architectures by George Reese, which is an excellent book, but it does not cover deployment. Additionally, I know that there are tools that can help with this, like RightScale or enStratus, which I will use when I start using more than one instance.

    Read the article

  • MVC2 IModelBinder and parsing a string to an object - How do I do it?

    - by burnt_hand
    I have an object called Time:

      public class Time
      {
          public int Hour { get; set; }
          public int Minute { get; set; }

          public static Time Parse(string timeString)
          {
              // Reads ToString()'s previous output and returns a Time object.
          }

          public override string ToString()
          {
              // Puts out something like 14:50 (as in 2:50 PM).
          }
      }

    So what I want is for the automatic model binding on the Edit or Create action to set this Time instance up from a string (i.e., feed the Parse method the string and return the result). The reason I am doing this is that I will have a DropDownList with selectable times, and the value of each option will be the parser-readable string. Can anyone provide an example BindModel method from the IModelBinder interface?
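
    Not the definitive answer, but a minimal sketch of what such a binder could look like in MVC 2 (error handling and model-state reporting omitted), registered once at application startup:

      using System.Web.Mvc;

      public class TimeModelBinder : IModelBinder
      {
          public object BindModel(ControllerContext controllerContext,
                                  ModelBindingContext bindingContext)
          {
              // Pull the raw posted value, e.g. "14:50" from the DropDownList.
              ValueProviderResult value =
                  bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
              if (value == null || string.IsNullOrEmpty(value.AttemptedValue))
                  return null;
              return Time.Parse(value.AttemptedValue);
          }
      }

      // In Global.asax.cs, Application_Start:
      // ModelBinders.Binders.Add(typeof(Time), new TimeModelBinder());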

    Read the article

  • Software version numbering with GIT

    - by revocoder revocorp
    Short: I want to set up automatic (or at least semi-automatic) software version numbering in Git.
    Detailed: I'm a newbie to Git. I recently created a bare Git repo and made some commits and pushes into it. I want to set some starting version number (like v1.0) for my project. I know there are tags for this purpose. I googled it and found a bunch of material. For example, the "git - the simple guide" blog says:

      You can create a new tag named 1.0.0 by executing
      git tag 1.0.0 1b2e1d63ff
      the 1b2e1d63ff stands for the first 10 characters of the commit id you want to reference with your tag.

    Kudelabs says:

      $ git tag -a 'milestone1' -m 'starting work for milestone 1, due in 2 weeks'
      $ git push --tags

    I'm really confused. What is the difference between the first and second methods, git tag and git tag -a? I can't figure out which to use for this purpose. And how can I set a version number in a bare remote repo to which I've made 5-6 commits and pushes?

    Read the article

  • Can I update an iOS Enterprise App in the background like an App Store app can?

    - by lehn0058
    I have an iOS enterprise app that we are distributing wirelessly to our devices. Currently the app polls our server once a day to see if there is an app update. If there is, we try to install it by having the app call the following code:

      NSURL *installUrl = [NSURL URLWithString:[NSString stringWithFormat:
          @"itms-services://?action=download-manifest&url=%@", plistUrl]];
      [[UIApplication sharedApplication] openURL:installUrl];

    This causes the app to prompt the user with an alert dialog to install the update. If they tap Install, the app closes and the update is downloaded and installed. I am wondering if there is anything for enterprise apps on iOS 7 similar to the App Store's automatic updates. I would like to be able to update our app without the user having to press an update button, and to update at a time when the user won't have to wait for it to install.

    Read the article

  • Blogger Code Image linking to post page

    - by Jm Agas
    Is this possible to achieve in Blogger? My goal is to make static-page images clickable, linking to the actual post page. I know it's possible by editing each post, but I want to make it automatic. For example, on 9gag.com, when you click an image on the homepage it takes you to the post page. I want to do the same, but in Blogger. Something like this:

      <b:if cond='data:blog.pageType != &quot;static_page&quot;'><a expr:href='data:post.url'><static page images></a></b:if>

    Screenshot: http://i.stack.imgur.com/YAWkL.jpg

    Read the article

  • HTML: dynamically repeated border-image

    - by Clox
    I have a table whose border I want to have a sort of zig-zag shape. I want the table to size automatically, resizing depending on how big the browser window is. But rather than just having an image that gets stretched, I want a seamless image that gets repeated instead. I found out this can be done with CSS3's border-image, but by looking at browser statistics I can see that only about half of all viewers would be able to see it, since no version of IE supports it yet. So I'm looking for an alternative method. What would be the best way of doing it? Thanks in advance!

    Read the article

  • Change default profile directory per group

    - by Joel Coel
    Is it possible to force Windows to create profiles for members of one Active Directory group in a different folder from members of another Active Directory group? The school here uses DeepFreeze to protect public computers. In a nutshell, DeepFreeze prevents all changes to a hard drive, such that every time you restart the machine the disk is identical to what it was at the time you froze it. This is a bit different from restoring to an image, in that it never really wrote changes to disk in a permanent way in the first place. This has a few advantages over images: faster recovery times, and it's easy to thaw the machine for a few minutes to perform maintenance such as Windows updates (which can even be automated). DeepFreeze also allows you to configure a "thawspace" partition, where changes are persistent across reboots. One of the weaknesses of DeepFreeze is that you end up needing to create a new profile every time you log in, unless your profile existed at the time the machine was frozen. And even then, any changes you make to your profile while working on a frozen machine are lost. As students have frequent legitimate needs to log in to our classroom machines, there is currently a lot of cleanup involved from time to time in removing their old profiles and changes, so I want to extend DeepFreeze to protect our classroom computers as well as the public computers. The problem is that faculty have a real need to keep a stateful profile locally on these classroom computers. The solution I would like to use is to configure Windows via group policy (or even manually, if that's the way I'll have to do it) to place profile folders on the thawspace partition, but only for members of the faculty security group. Is this possible?

    Read the article

  • Efficient storage/retrieval method for replayable comet-style applications (Google Wave, Etherpad)

    - by Gareth Simpson
    I am considering a web application that would have the same kind of multi-user, automatic-saving, infinite undo/replay capabilities that you see in Google Wave and Etherpad (albeit on a drastically smaller scale and user base). Before I go away and reinvent the wheel, is this something that has already been addressed, either as a piece of technology or a library, or even just a design pattern? I know this isn't necessarily the best Stack Overflow question, as there is probably not a single "right" answer, but my Google-fu has failed me and I'd just like a reading list! Ordinarily I would be developing under Python/Django, but this is not a firm requirement, just a preference :)

    Read the article

  • Where to learn about VS debugger 'magic names'

    - by Gael Fraiteur
    If you've ever used Reflector, you probably noticed that the C# compiler generates types, methods, fields, and local variables that deserve 'special' display by the debugger. For instance, local variables beginning with 'CS$' are not displayed to the user. There are other special naming conventions, for closure types of anonymous methods, backing fields of automatic properties, and so on. My question: where can one learn about these naming conventions? Does anyone know of any documentation? My objective is to make PostSharp 2.0 use the same conventions. Thank you!
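
    Lacking official documentation, one way to explore the conventions empirically is reflection: compiler-generated members are marked with [CompilerGenerated] and use names containing characters that are illegal in C# source. A small probe, offered as a sketch rather than an exhaustive catalog:

      using System;
      using System.Reflection;
      using System.Runtime.CompilerServices;

      class Probe
      {
          public int Value { get; set; } // auto-property -> hidden backing field

          static void Main()
          {
              foreach (FieldInfo f in typeof(Probe).GetFields(
                  BindingFlags.Instance | BindingFlags.NonPublic))
              {
                  bool generated = f.IsDefined(typeof(CompilerGeneratedAttribute), false);
                  // Typically prints "<Value>k__BackingField (compiler-generated)".
                  Console.WriteLine("{0}{1}", f.Name,
                      generated ? " (compiler-generated)" : "");
              }
          }
      }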

    Read the article

  • Passing parameters into ViewModels (Prism)

    - by vXtreme
    Hi, I can't figure out how to pass parameters to my view models from other views or view models. For instance, I have a view called Customers. There is a grid inside, and if you double-click the grid, a new view is supposed to come up and allow you to edit that customer's data. But how will the view (model) responsible for editing the data know which customer it's supposed to open if I can't pass any parameters in? The EventAggregator is out of the question, because I obviously can't create hundreds of event args, one for each view; besides, it's a lousy solution. So far I've been able to come up with:

      CustomerDataView custView = new CustomerDataView(customerId, currentContext);
      manager.Regions[RegionNames.Sidebar].AddAndActivate(custView);

    What do you think about this particular solution? Is this the way it's normally done? What I don't like about it is that I lose out on automatic dependency injection by Unity.
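
    One possible middle ground that keeps Unity in the loop: resolve the view through the container and supply the id with a ParameterOverride (available in Unity 2.0; a sketch, assuming "customerId" matches a constructor parameter name on CustomerDataView):

      using Microsoft.Practices.Unity;

      // The remaining constructor dependencies are still injected by the
      // container; only the named parameter is supplied by hand.
      CustomerDataView custView = container.Resolve<CustomerDataView>(
          new ParameterOverride("customerId", selectedCustomerId));
      manager.Regions[RegionNames.Sidebar].AddAndActivate(custView);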

    Read the article

  • Device CAL, User CAL, or Processor license needed for SQL 2008 (architecture explained inside)?

    - by nycgags
    So we have a number of servers in the Amazon cloud running SQL Server Standard Edition to aggregate data. For that purpose we are fine; the licensing is handled by our contract with Amazon, no problem there. For the beefier work, we want to install Enterprise Edition (EE) on the servers processing raw data so that we can take advantage of table partitioning. We currently have 3 servers aggregating data from about 40 node servers; all 43 of these servers are running Standard Edition, which is fine. We also have 4 servers running Standard processing the raw data, but I think we can get away with 2 (for redundancy) running Enterprise Edition. We have 2-3 DBAs who access these DW servers for maintenance (using the same Windows login via Remote Desktop). So visually:

      40    -- 3           -- [2]                          -- 2           -- 1
      nodes -- aggregators -- raw (which we want to run EE) -- calculators -- data warehouse

    Nodes PUSH to aggregators, raws PULL from aggregators, calculators PULL from raw, and calculators PUSH to the data warehouse. I am specifying push vs. pull in case that changes how the number of licenses is calculated.
    Q1) How many device (or user) CALs do we need?
    Q2) Do I need to speak with someone from MSFT to find out whether it is OK to install in the Amazon cloud (Amazon said we need to verify it is OK under our license terms)?
    Q3) What happens if another device tries to access a server with the limited number of device CALs?
    Q4) Are the device CALs a simultaneous number of devices, or a total?
    Q5) Do device and user CALs cost the same, or is there a difference?
    Q6) Would we need to buy a processor license (we are hoping not to)?

    Read the article

  • Correct use of SMTP "Sender" header?

    - by Eric Rath
    Our web application sends email messages to people when someone posts new content. Both sender and recipient have opted in to receiving email messages from our application. When preparing such a message, we set the following SMTP headers:

      FROM: [email protected]
      TO: [email protected]
      SENDER: [email protected]

    We chose to use the author's email address in the FROM header in an attempt to provide the best experience for the recipient: when they see the message in their mail client, the author is clear. To avoid the appearance of spoofing, we added the SENDER header (with our own company email address) to make it clear that we sent the message on the author's behalf. After reading RFCs 822 and 2822, this seems to be an intended use of the Sender header. Most receiving mail servers seem to handle this well; the email message is delivered normally (assuming the recipient mailbox exists, is not over quota, etc.). However, when sending a message FROM an address in a domain TO an address in the same domain, some receiving domains reject the messages with a response like:

      571 incorrect IP - psmtp (in reply to RCPT TO command)

    I think this means the receiving server only saw that the FROM address was in its own domain, and that the message originated from a server it didn't consider authorized to send messages for that domain. In other words, the receiving server ignored the SENDER header. We have a workaround in place: the web app keeps a list of such domains that seem to ignore the SENDER header, and when the FROM and TO headers are both in such a domain, it sets the FROM header to our own email address instead. But this list requires maintenance. Is there a better way to achieve the desired experience? We'd like to be a "good citizen" of the net, and all parties involved, senders and recipients alike, want to participate in and receive these messages. One alternative is to always use our company email address in the FROM header and prepend the author's name/address to the subject, but this seems a little clumsy.
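
    For what it's worth, if the application happens to be on .NET, System.Net.Mail models this distinction directly; a minimal sketch (addresses hypothetical):

      using System.Net.Mail;

      // From carries the author; Sender identifies the application that
      // actually submitted the message, emitted as the Sender: header.
      var message = new MailMessage();
      message.From = new MailAddress("author@example.com", "Author Name");
      message.To.Add(new MailAddress("recipient@example.com"));
      message.Sender = new MailAddress("webapp@company.example", "Our Web App");

      new SmtpClient("mail.company.example").Send(message);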

    Read the article

  • How to deploy and configure many copies of an application to multiple domains on the same server

    - by Oren
    We are about to begin work on an application that will eventually be deployed many times on one server. I am hoping to build a nice interface so that one of my coworkers can easily create new deployments of this application. The idea is to create a wizard with a series of options that will configure basic properties of each particular copy of the app, such as color scheme, domain name, etc. Each copy of the application may be further tweaked independently down the line. I would like to know the best way to manage the automatic creation of users, the updating of domain name info, and the deployment of copies of an application, with the ability to maintain certain discrepancies between these copies (such as installed plugins or different CSS) as we update the application in the future. What I'm asking is extremely similar to the way StackExchange 1.0 functioned, where a user could configure several options and a customized version of StackExchange would soon be up and running. How is this accomplished?

    Read the article

  • Office365 Exchange: Cannot open two shared calendars in Outlook

    - by Mark Williams
    The problem: Outlook won't open the calendars in another user's mailbox and a room mailbox, even when users have permission. Note: this problem is affecting more than one account on more than one machine. So I have a room mailbox and a personal mailbox on Exchange, both with shared calendars. There is a security group called "Scheduling Users" that has editor rights on both of these calendars. The room mailbox was created using PowerShell, per the instructions posted online (http://help.outlook.com/140/ee441202.aspx). Sharing worked on both of these folders initially, and users can still access them using OWA. So, on to the problem. When users try to open these calendars in Outlook, they receive one of the following messages:

      The set of folders cannot be opened. Microsoft Exchange is not available. Either there are network problems or the Exchange server is down for maintenance.

      Cannot open this item. Cannot open the free/busy information. The attempt to log on to Microsoft Exchange has failed.

    Here's what I've tried so far to solve the problem:
    - Reset the permissions on both mailboxes: I deleted the security group permissions, applied the change, then waited a bit and gave the permissions back.
    - Deleted the OST file of the shared calendar from the Outlook data directory.
    That is all I have been able to find online. Any thoughts? I have been going back and forth with the Office365 support folks for a while and they seem stumped too.

    Read the article

  • SimpleModal bug, or my brain?

    - by g0sha
    Sorry for my Eng. I'm trying to use SimpleModal for my project's authorization form, but here is a little trouble:

      <script type="text/javascript">
      function usr_init() {
          $('div#usrinfo').html("test");
      }
      </script>

      <a href="#" onClick='$("#authdiv").modal();'>TEST!</a>

    In authdiv I have a form with onSubmit="usr_init()". But after the automatic close, #usrinfo changes back to its previous value. What should I do about this problem?

    Read the article
