Search Results

Search found 1702 results on 69 pages for 'scratch'.


  • I have to do two seemingly mutually exclusive things on leaving an asp:textbox. Please help me get

    - by aape
    This project has gone from being a simple '99 Ford F-150 to the Homer. I've got controls with a gridview with textboxes for data entry. All the user controls on the pages are in AJAX UpdatePanels. The user types in a database column or budget entity or some other financial thing they want to include in the report. The textboxes in the gridview have AutoPostBack = true set.

    Overly long background info: When the user leaves the textbox, during the postback (triggered by OnTextChanged) I do some validation back on the server on their entry - regexes, do they have rights to that column, is that column locked, etc. If it fails, I put an error message next to the textbox. If it passes, I wipe out any title or error that used to be next to the code. Focus is getting lost from the postback if they're tabbing out of the box, rather than going to the next textbox in the gridview. So to fix that I need, if they're leaving the textbox via the Tab key, to also figure out which textbox or gridview row they're on and, if they're not on the last row, put the focus on the textbox in the next row after the validation and labeling. I can't figure out how, in OnTextChanged, to find what caused me to leave the textbox, so I'm thinking of using JavaScript onkeyup to test the key pressed and then find the next box, etc. But OnTextChanged fires first and then the JS never does. Also, since the control is all AJAXed, the JavaScript can't find the textboxes, because when you enter the page everything is collapsed (the requirements people loooove to collapse and expand things), and so when it's expanded all the 'new' textboxes are up in the viewstate stuff in the page source, and not down where JavaScript can see them.

    The questions: So I'm wondering if I can have an onblur in the JavaScript that can trigger a postback where I can do my validation and such, and either 1) include the key pressed, or pick it out of the sender in the event, or 2) follow up the onblur with onkeyup and somehow figure out which textbox is next on the grid and throw focus there. Or is there another .NET-based approach that could work for this? In terms of tearing the whole thing down and starting from scratch, I couldn't sell that to the bosses; I'm past the point of no return as far as that goes.
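    A rough client-side sketch of the first idea, assuming plain JavaScript on top of the existing ASP.NET AJAX setup: record the Tab press in a hidden field before the OnTextChanged postback runs, so the server-side handler can read it after validation and put focus on the next row's textbox (for example with ScriptManager.SetFocus). The names txtEntry and hidLastTabbedFrom are hypothetical.

    ```javascript
    // pageLoad() is invoked by ASP.NET AJAX after full and partial postbacks,
    // so the handlers are re-attached whenever the UpdatePanel refreshes.
    function pageLoad() {
        var inputs = document.getElementsByTagName('input');
        for (var i = 0; i < inputs.length; i++) {
            // 'txtEntry' is a hypothetical marker for the gridview entry textboxes
            if (inputs[i].type === 'text' && inputs[i].id.indexOf('txtEntry') !== -1) {
                inputs[i].onkeydown = rememberTab;
            }
        }
    }

    function rememberTab(e) {
        e = e || window.event;
        if (e.keyCode === 9) { // Tab
            // Remember which textbox is being left; a hidden field survives the
            // partial postback, so the OnTextChanged handler on the server can
            // look it up and decide which textbox should get focus next.
            document.getElementById('hidLastTabbedFrom').value = this.id;
        }
    }
    ```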

    Read the article

  • Custom cell in list in Flash AS3

    - by Stian Flatby
    I am no expert in Flash, and I need some quick help here, without needing to learn everything from scratch. Short story: I have to make a list where each cell contains an image, two labels, and a button. List/cell example: img - label - label - button, img - label - label - button. As a Java programmer, I have tried to quickly learn the syntax and visual side of Flash and AS3, but with no luck so far. I have fairly well understood the basics of movie clips etc. I saw a tutorial on how to add a list and add some text to it. So I dragged in a list, and in the code went list.addItem({label:"hello"});, and that worked of course. So I thought that if I double-clicked the MC of the list, I would get to tweak some things. In there I have been wandering around different halls of cell renderers etc. I have now come to the point where I entered the CellRenderer_skinUp or something, and customized it to my liking. When this was done, I expected I could use list.addItem(); and get an empty "version" of my cell, with the img, labels and the button. But AS3 expects an input in addItem. From my object-oriented view, I am thinking that I have to create an object of the cell I have made, but I have no luck reaching it. I tried to go var test:CellRenderer = list.listItem; list.addItem(test); ...but with no luck. This is just for funsies, but I really want to make this work - however, not so much that I am willing to read up on a LOT of Flash and AS3. I felt that I was closing in on the prize, but the compiler expected a semicolon after the variable (list.addItem({test:something});). Note: if possible, I do NOT want this: list.addItem({image:"src",label:"text",label:"text",button:"text"}); Well... it actually is what I want, but I would really like to custom-draw everything. Does anyone get what I am trying to do, and does anyone have any answers for me? Am I approaching this the wrong way? I have searched the interwebs for custom list cells, but with no luck. Please, any guiding here is appreciated! Sti

    Read the article

  • Database Schema Versioning Strategies

    - by Jack Ryan
    I work on a project that uses a reasonably large database, the live version weighing in at somewhere around 60-80GB. The live database is the only real definitive source of our schema, and because of its size, duplicating this database is too slow to be done often. This means we have ended up developing our database schema in a pretty ad hoc way, using SQL Compare to migrate changes from dev DBs to the live system, and only wiping our dev DBs every month or two. I am hoping to get some pointers on how to improve our database development workflow so that we have a little more control. Some things to think about:

    - Currently nobody is really in charge of the database schema; all developers can change it if they need to, though generally these decisions are talked about before they are made.
    - There are stored procedures, functions, and views in the database. These should probably be dumped to files so they can be reloaded on every build.
    - Schema changes should probably be checked in as scripts. We have started to do this recently. However, all our scripts must then be numbered (because there may be dependencies between them) and must be re-runnable (because our build script currently runs them all in order). This makes them hard to read because they are full of conditionals that check whether tables or columns already exist (an example of such a script is sketched after this list). This is a step that is often forgotten by developers.
    - Getting a new database should be quick and easy. This is currently a big problem; it takes several hours to get a copy of last night's backup and restore it onto a dev machine.
    - Some mechanism needs to be in place to allow developers to update static data. We have tables that contain data that is never updated through the application, but does potentially need to be changed when we do a new release (often this drives dropdowns).
    - The whole thing needs to be runnable as part of a build script.

    Are there any tools that can be used to help do this? Eventually I would like to be at a point where a new DB can be built from scratch without copying any data from the live system. I don't mind writing some scripts to glue all the steps together, but each part should be easily editable so that we continue to use it rather than make changes directly on DBs.
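    A minimal sketch of the re-runnable migration pattern mentioned above, assuming SQL Server (implied by the use of SQL Compare); the table, column, and reference-data names are made up for illustration:

    ```sql
    -- Migration 0042: add a nullable column, guarded so the script can run repeatedly.
    IF NOT EXISTS (
        SELECT 1
        FROM sys.columns
        WHERE object_id = OBJECT_ID('dbo.Customer')
          AND name = 'LoyaltyTier'
    )
    BEGIN
        ALTER TABLE dbo.Customer ADD LoyaltyTier int NULL;
    END
    GO

    -- Static/reference data (e.g. dropdown values) can be made re-runnable the same way.
    IF NOT EXISTS (SELECT 1 FROM dbo.DropdownValue WHERE Code = 'GOLD')
    BEGIN
        INSERT INTO dbo.DropdownValue (Code, Description) VALUES ('GOLD', 'Gold tier');
    END
    GO
    ```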

    Read the article

  • JPA - Can I create an Entity class, using an @DiscriminatorValue, that doesn't have its own table?

    - by DaveyDaveDave
    Hi - this is potentially a bit complex, so I'll do my best to describe my situation - it's also my first post here, so please forgive formatting mistakes, etc! I'm using JPA with joined inheritance and a database structure that looks like: ACTION --------- ACTION_ID ACTION_MAPPING_ID ACTION_TYPE DELIVERY_CHANNEL_ACTION -------------------------- ACTION_ID CHANNEL_ID OVERRIDE_ADDRESS_ACTION -------------------------- ACTION_ID (various fields specific to this action type) So, in plain English, I have multiple different types of action, all share an ACTION_MAPPING, which is referenced from the 'parent' ACTION table. DELIVERY_CHANNEL_ACTION and OVERRIDE_ADDRESS_ACTION both have extra, supplementary data of their own, and are mapped to ACTION with a FK. Real-world, I also have a 'suppress' action, but this doesn't have any supplementary data of its own, so it doesn't have a corresponding table - all it needs is an ACTION_MAPPING, which is stored in the ACTION table. Hopefully you're with me so far... I'm creating a new project from scratch, so am pretty flexible in what I can do, but obviously would like to get it right from the outset! My current implementation, which works, has three entities loosely defined as follows: @Entity @Table(name="ACTION") @Inheritance(strategy=InheritanceType.JOINED) @DiscriminatorValue("SUPPRESS") public class Action @Entity @Table(name="DELIVERY_CHANNEL_ACTION") @DiscriminatorValue("DELIVERY_CHANNEL") public class DeliveryChannelAction extends Action @Entity @Table(name="OVERRIDE_ADDRESS_ACTION") @DiscriminatorValue("OVERRIDE_ADDRESS") public class OverrideAddressAction extends Action That is - I have a concrete base class, Action, with a Joined inheritance strategy. DeliveryChannelAction and OverrideAddressAction both extend Action. What feels wrong here though, is that my Action class is the base class for these two actions, but also forms the concrete implementation for the suppress action. For the time being this works, but at some point more actions are likely to be added, and there's every chance that some of them will, like SUPPRESS, have no supplementary data, which will start to get difficult! So... what I would like to do, in the object model world, is to have Action be abstract, and create a SuppressAction class, which is empty apart from having a @DiscriminatorValue("SUPPRESS"). I've tried doing exactly what is described above, so, changing Action to: @Entity @Table(name="ACTION") @Inheritance(strategy=InheritanceType.JOINED) public abstract class Action and creating: @DiscriminatorValue("SUPPRESS") public class SuppressAction extends Action but no luck - it seems to work fine for DeliveryChannelAction and OverrideAddressAction, but when I try to create a SuppressAction and persist it, I get: java.lang.IllegalArgumentException: Object: com.mypackage.SuppressAction[actionId=null] is not a known entity type. at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.registerNewObjectForPersist(UnitOfWorkImpl.java:4147) at org.eclipse.persistence.internal.jpa.EntityManagerImpl.persist(EntityManagerImpl.java:368) at com.mypackage.test.util.EntityTestUtil.createSuppressAction(EntityTestUtil.java:672) at com.mypackage.entities.ActionTest.testCRUDAction(ActionTest.java:27) which I assume is down to the fact that SuppressAction isn't registered as an entity, but I don't know how I can do that, given that it doesn't have an associated table. 
Any pointers, either complete answers or hints for things to Google (I'm out of ideas!), most welcome :) EDIT: to correct my stacktrace.
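    Based on the snippets above, one likely cause of the "not a known entity type" error is that the SuppressAction subclass shown carries only @DiscriminatorValue, so the provider never registers it. A minimal sketch of the subclass (package and registration details assumed):

    ```java
    import javax.persistence.DiscriminatorValue;
    import javax.persistence.Entity;

    // The subclass still needs @Entity even though it adds no fields of its own,
    // and it must be visible to the persistence unit (listed in persistence.xml
    // or picked up by class scanning); otherwise EclipseLink reports
    // "is not a known entity type" when you try to persist it.
    @Entity
    @DiscriminatorValue("SUPPRESS")
    public class SuppressAction extends Action {
        // No extra columns here. Whether the provider still expects an (empty)
        // joined table for this subclass is provider-specific and worth verifying.
    }
    ```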

    Read the article

  • Newbie jQuery question: Need slideshow to rotate automatically, not just when clicking navigation.

    - by Justin
    Hi everyone, This is my first post, so please forgive me if this question has been asked a million times. I'm a self professed jQuery hack and I need a little guidance on taking this script I found and adapting it to my needs. Anyway, what I'm making is an image slide show with navigation. The script I found does this, but does not automatically cycle through the images. I'm using jQuery 1.3.2 and would rather stick with that than using the newer library. I would also prefer to edit what is already here rather than start from scratch. Anywho, here's the html: <div id="myslide"> <div class="cover"> <div class="mystuff"> <img alt="&nbsp;" src="http://www.mfhc.com/wp/wp-content/uploads/2010/05/current_Denver-skyline.jpg" /> </div> <div class="mystuff"> <img alt="&nbsp;" src="http://www.mfhc.com/wp/wp-content/uploads/2010/05/pepsi_center-IS42RF-0D111C.jpg" /> </div> <div class="mystuff"> <img alt="&nbsp;" src="http://www.mfhc.com/wp/wp-content/uploads/2010/05/columbine-2689820469_D1104.jpg" /> </div> <div class="mystuff"> <img alt="&nbsp;" src="http://www.mfhc.com/wp/wp-content/uploads/2010/05/ist2_10460354-RedRocks.jpg" /> </div> </div> <!-- end of div cover --> </div> <!-- end of div myslide --> And here's the jQuery: <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"></script> <script type="text/JavaScript"> $(document).ready(function (){ $('#button a').click(function(){ var integer = $(this).attr('rel'); $('#myslide .cover').css({left:-820*(parseInt(integer)-1)}).hide().fadeIn(); /*----- Width of div #mystuff (here 820) ------ */ $('#button a').each(function(){ $(this).removeClass('active'); if($(this).hasClass('button'+integer)){ $(this).addClass('active')} }); }); }); </script> Here's where I got the script: http://www.webdeveloperjuice.com/2010/04/07/create-lightweight-jquery-fade-manual-slideshow/ Again, if this question is too basic for this site please let me know and possibly provide a reference link or two. Thanks a ton!
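    One way to get automatic rotation on top of the snippet above, staying with jQuery 1.3.2, is a setInterval that replays the same slide-and-fade logic as the click handler. The 5-second interval is arbitrary, and the #button links are assumed to carry rel="1".."4" as the handler implies:

    ```javascript
    $(document).ready(function () {
        var slideWidth = 820;                        // same width used in the click handler
        var total = $('#myslide .mystuff').length;   // 4 slides in the markup above
        var current = 0;

        setInterval(function () {
            current = (current + 1) % total;
            // slide and fade exactly like the manual navigation does
            $('#myslide .cover').css({left: -slideWidth * current}).hide().fadeIn();
            // keep the nav buttons in sync with the slide that is showing
            $('#button a').removeClass('active')
                          .filter('[rel="' + (current + 1) + '"]')
                          .addClass('active');
        }, 5000);                                    // rotate every 5 seconds
    });
    ```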

    Read the article

  • ODBC: when is the best time to create my database?

    - by mawg
    I have a Windows program which generates PHP forms which will be filled in later. Those PHP forms will populate a database. It looks very much like MySQL, but I can't be certain, so let's call it ODBC. And, yes, it does have to be a Windows program. There will also be PHP forms which query the database - examine which tables and fields it contains - and then generate forms which can be used to search the database (e.g., it finds a table with fields "employee_name", etc. and generates a form which lets you search based on employee name). Let's call that design time and run time. At design time, some manager or IT guy or similar gets to define the nature of the database, and at run time 1) a worker fills in the form daily and 2) management can extract reports.

    Here's my question: given that the database is defined at "design time" (and populated at run time), where and how is best to do so?

    1) I could use an ODBC interface from the Windows program, but I am having difficulty finding something good to work with Delphi. Things like ADO and Firebird tend to expect you to already have a database and allow you to manipulate it, but I can find no code example of how to create a database and some tables, so...

    2) I could use DOS commands from Delphi in my Windows program. I just tried and got a response to mysql --version, but am not sure if MySQL etc. are more interactive. That is, can I use a script file, or a very long stacked command with semicolons and returns separating the statements? e.g. 'CREATE DATABASE db; CREATE TABLE t1;' (a sketch of such a script is below)

    3) Since the best way to work with databases seems to be PHP, perhaps my Windows program could spit out a PHP page which would, when run in a browser, create the database.

    I have tried to make this as uncomplicated as I can, but please feel free to ask questions. It may be that there are several valid ways, but there is probably one 'better' solution in terms of ease of implementation or maintenance. Better scratch option 3. What if the user later wants to come back and have the Windows program change the input form? It needs to update the database too.
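    For option 2, the mysql command-line client will happily run a whole semicolon-separated script, so the Windows program can write a file and invoke something like mysql --user=someuser --password=secret < create_db.sql. A minimal sketch of such a script, with example names only:

    ```sql
    -- create_db.sql: safe to run more than once thanks to IF NOT EXISTS
    CREATE DATABASE IF NOT EXISTS report_db;
    USE report_db;

    CREATE TABLE IF NOT EXISTS employee (
        id            INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        employee_name VARCHAR(100) NOT NULL,
        hired_on      DATE         NULL
    );

    CREATE TABLE IF NOT EXISTS timesheet (
        id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        employee_id INT UNSIGNED NOT NULL,
        work_date   DATE         NOT NULL,
        hours       DECIMAL(4,2) NOT NULL
    );
    ```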

    Read the article

  • Windows Machine Debugging

    - by PrettyFlower
    I've been learning how to program for Windows for some time now and am getting pretty comfy with COM. I had thought to go over to Linux and do some C++ programming there and I wished to run Rosetta Commons so I installed Fedora. I had tried installing Ubuntu a few months ago and things got messy. I had a glitch, maybe caused by one of the live cd creators, my video card or something I don't know. Who Crashed suggested it was my video card and I had regular messages about ntfs.sys and page file issues. At any rate I just installed Fedora and the same thing is happening again. I would like to think with the twenty five years of doing this that I might finally make some headway into debugging my system. I think I may have overlooked a lot of what could be done in favor of simply uninstalling, reinstalling and formatting and starting from scratch. I have opened up the folder windows debugging tools, quite accidentally and just before I was going to clean sweep again, and I found KD and WinDbg. I had never seen these before and I felt that maybe I should look into this. I am quite familiar with the modern machine that is known as the computer, I know what a Kernel is and am now pretty familiar with at the very least Windows Operating System Services. I wish to begin tracking my own machines errors. I understand that most kernel debugging is done on a second machine but I don't have one. And also I understand the goal of the debugger seems to be less about run of the mill errors and more about development time strategies but I'm sure there is more to this. This is my first go at this and I thought maybe I could get some suggestions on where to go from here. I would really like to learn ways to fix my machine and also maybe pick up some tricks on the dev side as well. I hope this isn't too broad a question or too generalized. I'm really just looking for the keywords and an overview of the more routine strategies used. thx

    Read the article

  • [C] Programming problem: Storing values of an array in one variable

    - by OldMacDonald
    Hello, I am trying to use md5 code to calculate checksums of file. Now the given function prints out the (previously calculated) checksum on screen, but I want to store it in a variable, to be able to compare it later on. I guess the main problem is that I want to store the content of an array in one variable. How can I manage that? Probably this is a very stupid question, but maybe somone can help. Below is the function to print out the value. I want to modify it to store the result in one variable. static void MDPrint (mdContext) MD5_CTX *mdContext; { int i; for (i = 0; i < 16; i++) { printf ("%02x", mdContext->digest[i]); } // end of for } // end of function For reasons of completeness the used struct: /* typedef a 32 bit type */ typedef unsigned long int UINT4; /* Data structure for MD5 (Message Digest) computation */ typedef struct { UINT4 i[2]; /* number of _bits_ handled mod 2^64 */ UINT4 buf[4]; /* scratch buffer */ unsigned char in[64]; /* input buffer */ unsigned char digest[16]; /* actual digest after MD5Final call */ } MD5_CTX; and the used function to calculate the checksum: static int MDFile (filename) char *filename; { FILE *inFile = fopen (filename, "rb"); MD5_CTX mdContext; int bytes; unsigned char data[1024]; if (inFile == NULL) { printf ("%s can't be opened.\n", filename); return -1; } // end of if MD5Init (&mdContext); while ((bytes = fread (data, 1, 1024, inFile)) != 0) MD5Update (&mdContext, data, bytes); MD5Final (&mdContext); MDPrint (&mdContext); printf (" %s\n", filename); fclose (inFile); return 0; }
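    A minimal sketch of the change being asked about: instead of printing, format the 16 digest bytes (from the MD5_CTX struct shown above) into a caller-supplied string, which can then be kept and compared later with strcmp. ANSI-style prototypes are used here rather than the K&R style of the original.

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Writes the digest as 32 hex characters plus a terminating '\0'.
       The caller must supply a buffer of at least 33 chars. */
    static void MDToString (MD5_CTX *mdContext, char *out)
    {
        int i;
        for (i = 0; i < 16; i++)
            sprintf (out + i * 2, "%02x", mdContext->digest[i]);
    }

    /* Usage inside MDFile, after the MD5Final call:

           char checksum[33];
           MDToString (&mdContext, checksum);
           ...
           if (strcmp (checksum, previousChecksum) == 0) { the checksums match }
    */
    ```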

    Read the article

  • Windows 7 Boot to VHD using a VHD clone of the system drive

    - by daveh551
    This seems like a not too difficult problem, and, after several hurdles, I'm maddeningly close. But I can't quite get there. I'm running Windows 7 in development shop. I want to start using VS2010 to work on some stuff that won't be released for awhile. My boss said no beta code on the production machine, but I could run VS2010 for this project IF I could do it in an isolated environment, like a virtual PC. Well, I've used the beta and RC of Win7 on VPC's before, and it was painfully slow because of the VPC environment. But everyone has been singing the praises of Windows 7's boot-to-VHD capability, where only the disk is virtualized, and you're actually running on the hardware. Supposed to be little slower, but nowhere near the speed penalty of VPC. I've spent a fair amount of time getting everything installed the way I want it. So I figured, I'll just clone my system drive using Disk2VHD, and boot off of that, and then install VS2010 onto that. (I keep most of my user data, including all my projects, in a separate partition, so that wouldn't have to be duplicated and would still be available.) Well, I had some difficulties with that, owing mainly to the fact that I was using an old version of Disk2VHD - (get the latest if you're going to try it.) But I did finally get it to boot. (Scott Hanselman has a good blog post on boot to VHD). But it wasn't exactly what I was expecting or hoping for. What I expected was that the VHD would become the C: drive, and the original (physical) C: drive would be either hidden or mounted under a different letter, and thus isolated and protected from any changes. What you actually get is that the VHD becomes the D: drive AND you boot from the D: drive, BUT your original C: drive is still there. Which is sort of okay EXCEPT that the Registry on the VHD is a clone of the Registry on C: drive, and includes many hard-coded references to C:. So the result is that some things come from (and modify) D: (the VHD), but some things come from (and modify) C:. (If you open a cmd prompt and do a SET to look at your environment variables, you will see a mixture of D:\ and C:\ paths.) So I don't really have an isolated environment. Most importantly, %ProgramFiles% is still set to C:\Program Files. What I really need is a tool that can access the registry files on the mounted VHD AS FILES, not as registry entries, and do a global search and replace on all the C:\ in strings to D:. I haven't found such a program. (I've tried to do it with a program called Registry Replace, but, even when running as Administrator, there are certain entries that the Registry won't let you change.) Does anyone know of one? Or any other solution to my problem (other than starting from scratch with a clean VHD and installing Win7 and all my programs on it.)?

    Read the article

  • SQL Server (2012 Enterprise) Browser service failing

    - by Watki02
    SQL Server (2012 Enterprise) Browser service failing I have a problem as described below: I have an instance of SQL Server 2012 Enterprise (thanks to MSDN) for local development on my PC. I try to start SQL server Browser Service from SQL Server Configuration Manager and it takes a long time to fail, then fails with: The request failed or the service did not respond in a timely fashion. Consult the event log or other applicable error logs for details. I checked event logs and found these errors in this order (all within the same 1-second time frame): The SQL Server Browser service port is unavailable for listening, or invalid. The SQL Server Browser service was unable to establish SQL instance and connectivity discovery. The SQL Server Browser is enabling SQL instance and connectivity discovery support. The SQL Server Browser service was unable to establish Analysis Services discovery. The SQL Server Browser service has started. The SQL Server Browser service has shutdown. I checked firewall rules and both port 1433 (TCP) and 1434 (UDP) are wide open, just as well - the programs and service binary had been "allowed through windows firewall". I started the "Analysis Services" service by hand and it works fine. Browser still won't start. Some History: Installed SQL 2008 R2 express advanced Installed SQL2012 Express advanced Uninstalled SQL 2008 R2 express advanced Installed 2012 SSDT and lots of features with Express install Installed a unique instance of SQL 2012 Enterprise with all features Uninstalled SSDT and reinstalled SSDT with Enterprise (solved a different problem) Uninstalled SQL 2012 Express Uninstalled SQL 2012 Enterprise Removed anything with "SQL" in the name from Control panel "Programs and features" Installed SQL 2012 Enterprise without Analysis services (This is where I noticed SQL Browser service was failing to start even on the install) Added the feature of Analysis Services (and everything else) via the installer (Browser continued to fail to start on the install) ======================== Other interesting facts: opening a command window with administrator and trying to run sqlbrowser.exe manually yielded: Microsoft Windows [Version 6.1.7601] Copyright (c) 2009 Microsoft Corporation. All rights reserved. C:\Windows\system32cd C:\Program Files (x86)\Microsoft SQL Server\90\Shared C:\Program Files (x86)\Microsoft SQL Server\90\Sharedsqlbrowser.exe -c SQLBrowser: starting up in console mode SQLBrowser: starting up SSRP redirection service SQLBrowser: failed starting SSRP redirection services -- shutting down. SQLBrowser: starting up OLAP redirection service SQLBrowser: Stopping the OLAP redirector C:\Program Files (x86)\Microsoft SQL Server\90\Shared As I try to repair the install it errors out saying The following error has occurred: Service 'SQLBrowser' start request failed. Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup. For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=0x4F9BEA51%25400xD3BEBD98%25401211%25401 Clicking retry fails every time. When clicking cancel I get: The following error has occurred: SQL Server Browser configuration for feature 'SQL_Browser_Redist_SqlBrowser_Cpu32' was cancelled by user after a previous installation failure. The last attempted step: Starting the SQL Server Browser service 'SQLBrowser', and waiting for up to '900' seconds for the process to complete. . 
For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=0x4F9BEA51%25400xD3BEBD98%25401211%25401 When I go to uninstall the SQL Browser from "Programs and Features", it complains: Error opening installation log file. Verify that the specified log file location exists and is writable. Is there any way I can fix this short of re-imaging my computer and reinstalling from scratch? A possible approach would be to somehow really uninstall everything and delete all files related to SQL... is that a good idea, and how do I do that?

    Read the article

  • AdPrep logs show an LDAP error

    - by Omar
    What I am trying to do is transition our domain from Server 2003 Enterprise x32 to Server 2008 R2 Enterprise x64. Here is what I have done thus far. The 2003 server is a physical machine, the 2008 server is a virtual machine Built a virtual machine that has Server 2008 R2 Enterprise x64 and joined it to the domain as a domain member On the 2003 DC, Raised Domain Functional Level and Forest Functional Level to Windows Server 2003 On the 2003 DC, went into the registry and navigated to HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters and verified that the Schema Version is 30 On the 2003 DC, inserted the Windows Server 2008 Enterprise x32 Edition to copy over the adprep folder. This version is the only one that seemed to work On the 2003 DC, opened command prompt and went to adprep directory and ran adprep /forestprep , adprep /domainprep , and adprep /domainprep /gpprep On the 2008 server, Installed the Active Directory Domain Services role from Server Manager On the 2003 DC, went into the registry and navigated to HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters and verified that the Schema Version is now 44 When I go to run dcpromo on the 2008 server, I get a message that says: "To install a domain controller into this Active Directory forest, you must first prepare using adprep /forestprep" I went back to the 2003 DC server and went through the adprep logs and I came across this: Adprep was unable to modify the security descriptor on object CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com. [Status/Consequence] ADPREP was unable to merge the existing security descriptor with the new access control entry (ACE). [User Action] Check the log file ADPrep.log in the C:\WINDOWS\debug\adprep\logs\20100327143517 directory for more information. Adprep encountered an LDAP error. *Error code: 0x20. Server extended error code: 0x208d, Server error message: 0000208D: NameErr: DSID-031001CD, problem 2001 (NO_OBJECT), data 0, best match of: 'CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com* In fact, I got three of these errors. The LDAP error is consistent with all three, but the top part where it says "Adprep was unable to modify the security descriptor on object" are different. They are the following: CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com. CN=DirectoryEmailReplication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com. CN=KerberosAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com. The credentials I am using on the 2008 server when running dcpromo is my domain account. My account is part of the domain and enterprise admin groups. I've tried various quick fixes that I've came across through Google searches that include: Disabling AntiVirus on current DCs Pointing DNS on PDC to point to itself Changing the Schema Update Allowed key to 1 and tried rerunning adprep - when rerunning adprep, told me that Forest-wide information has already been updated Disabled Windows Firewall on the Server 2008 box On the 2003 DC, went to Domain Controller Security Policy Local Policies User Rights Assignment and added Domain Admins to the Enable computer and user accounts to be trusted for delegation policy setting Both our PDC and BDC are Global Catalog Servers. 
Not sure if this matters or not I ran the command netdom query fsmo and verified that the FSMO role holder is the current 2003 PDC I ran dcdiag /v on the 2003 PDC and the only thing that failed was Services. Dnscache Service is stopped on the PDC I even went as far as deleting the virtual machine and recreating it from scratch - no avail... Help :(

    Read the article

  • Windows 7 .NET 3.5.1 - 2.0 Slightly Corrupted, How to Repair?

    - by Quinxy von Besiex
    My Windows 7 included .NET installation (3.5 to 2.0) appears very slightly and particularly corrupted and I am trying to fix it without reinstalling Windows or trying to revert to backups. Everything was working and then my hard drive started corrupting a few files and checkdisk found bad clusters so I imaged the drive to a new one. As soon as I booted on the new drive everything worked except programs which call the System.Net.NetworkInformation methods within .NET 3.5 to 2.0 (like Ping() and IsNetworkAvailable()), which immediately crash the app in which the calls are (those calls in .NET 4.0 works fine). Those methods are found inside System.dll, and I assume call native methods which I believe are inside winnsi.dll or iphlpapi.dll or something else (I've not found this yet); I assume it calls native methods because the exception which causes the crash is Fatal Execution Engine Error which people mention is usually related to calling native methods and marshaling data between them. A huge clue about the culprit is likely found in the fact that when I launch the exact same crashing application through a code profiler (which executes the exe and captures stats on which methods took the longest) the app works fine, no crash at all! How could running it within the profiler work and running it outside not work? That seems the key to the mystery. I've used procmon to catch all the registry, filesystem, and network events from the crashing execution and the profiler-run successful execution and compared the two outputs but didn't learn much (I see the moment at which the non-profiled app crashes, but up until then they behave the same, loaded the same modules, ). The only big difference seems to be that at the moment before the app crash the profiler-executed code creates 4-6 new threads and the directly executed code only creates 1-2. I have diffed the files/directories which seemed most relevant (the .NET stuff under Windows and Program Files) pre- and post- disk trouble and seen no changes where I didn't expect any (no obvious file corruption). I have diffed the software and system registry hives pre- and post- disk trouble and seen no changes which seemed relevant. I have created a new user account and cleaned up any environment variables in case environment was related. No change. I did "sfc /scannow" and it found no integrity problems. I tried "ngen update" to regenerate pre-compiled code in case I missed something that might be damaged and nothing changed. I assume I need to repair my .NET installation but because Windows 7 included .NET 3.5 - 2.0 you can't just re-run a .NET installer to redo it. I do not have access to the Windows disks to try to re-install Windows over itself (the computer has a recovery partition but it is unusable); also the drive uses a whole-disk encryption solution and re-installing would be difficult. I absolutely do not want to start from scratch here and install a fresh Windows, reinstall dozens of software packages, try and remember dozens of development-related customizations/etc. Given all that... does anyone have any helpful advice? I need .NET 3.5 - 2.0 working as I am a developer and need to build and test against it. Thanks! Quinxy

    Read the article

  • Strange enduser experience with Liferay, Glassfish and Apache on RedHat

    - by Pete Helgren
    Tried multiple forums to get to the bottom of this. I hope I can get some direction here: Here is the stack I am working with: Red Hat Enterprise Linux Server release 5.6 (Tikanga) Liferay 6.0.6 on Glassfish 3.0.1 MySQL 5.0.77 Apache 2.2.3 The Liferay portal provides a variety of portlets to end users. Static content (web pages), static resources (primarily pdf and mp3 files 1mb - 80mb in size), File upload and download capabilities (primarily 40-60mb mp3 files) and online streaming of those MP3 files. Here is the strange end user experiences: Under normal load: (20-30) users uploading, downloading or streaming files and 20-30 accessing static content (some of it downloads), we see the following: 1) Clicking a link triggers the download of a portion of an MP3 (the portion is a few seconds long). 2) Clicking on a link tiggers the download of the page content rather than rendering. 3) Clicking a link causes the page to dump binary data to the end user rather than the expected content. 4) Clicking a link returns the text of a javascript file rather than rendering the page. Each occurrence is totally random (or appears so). Sometimes it works, sometimes it doesn't. It seems to have no relation to browser or client OS. The strange events seem to occur much more frequently when using an SSL connection rather than regular http. Apache serves as a proxy server only (reverse). It basically passes all the requests through to Glassfish. There isn't any static content proxy served by Apache. We rebuilt the entire stack from scratch and redeployed the portlet wars and still have the same issues. Liferay is running as a single server (not clustered). We disabled mod_cache in Apache. The problems are more frequent as the server load grows. This morning the load is pretty light and we are seeing few problems but the use of the site will grow, particularly tonight around 9pm CST through Wednesday morning. You could try the site (http://preview.bsfinternational.org) during those times and I would expect that you might experience one of the weirdnesses as you randomly click links on the site (https is invoked only when signed in). Again, https seems to exacerbate the issue. This seems very much like a caching issue but I don't know where in the stack to start peeling the onion. Apache? Liferay? Glassfish? MySQL? Maybe even Redhat? We are stumped and most forums we have posted to (LifeRay and Glassfish) have returned very few suggestions. I just need an idea of where to start looking. I understand that we could have a portlet EDIT: Opening the files in a Hex editor that appear to be pages that download rather than render, we see that the first 4000 characters are "junk" and then the "HTTP/1.1 ...." 'normal' header is seen. So something is dumping a jumble of characters up to offset 4000 (when viewing it in a Hex editor). Perhaps a clue? Ideas?

    Read the article

  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question. Context: We have a bunch of computers across around 1000 users. We have a centralized office where 900 of the users work, most of the time. Most of the computers are laptops. They are very frequently coming on and off the network for hours at a time. Users often take their computers home and do lots of work from home. In addition, there are a handful of users who work elsewhere in the country, who are offline (no internet connection whatsoever) for more than half of the time they use their machines. All of the machines are Windows 7/XP. Problem: People are always losing data. One day someone accidentally deletes a bunch of files. The next day someone else installs a bad driver or tries to mess with something in system32 and needs a personal data backup/reinstall of Windows. Because of how many of our business operations are done without an internet connection, and how frequently computers come on- and offline, it's unfeasible to make users use network storage for all of their data. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get files back that they accidentally deleted while they were offline and hadn't taken a backup in months. We tried teaching them backup best-practices, and using scheduled sync tools to upload things to the network drives, and they turned them off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amicable to any sort of "you need to do something regularly because we say so" solution. Question: Other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following: Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule. Supports interrupted/resumed backups (and hopefully file-delta only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so. Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC. Supports local-to-local file delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions or whatnot. Ideally, the local stored backups would be pushed up to the server whenever network link is available. Isn't configurable on the clients without certain credentials. Because the CFOs (who won't give up their admin rights on the domain) will disable it if they can. Backs up the entire hard drive. There are people who are self-righteous about storing things in C:\, or in the recycle bin, or in the C:\Windows dir (yes, I know). I'm fine integrating multiple products/solutions, or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, Volume Shadow Copy is awesome for a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a 2 hour window of network access. The company is fine with spending decent money on this, thousands (USD) on a server, and hundreds on clients, if necessary. 
I want to emphasize that this isn't a shopping list request. While I wish there was a program out there that did what I want, I've looked pretty hard, and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch/from different technologies to make something stable that works. Cheers!

    Read the article

  • Intel Rapid Storage / Smart Response SSD caching issue

    - by goober
    Background Recently built my own PC. It works! Almost. It's been a while since getting into the guts of these things, so I'm familiar but may be missing something simple. FYI, I don't care about blowing the OS away -- it's brand new and we can go back from scratch as many times as necessary. Goal / Issue I'd like to use the SSD to take advantage of Intel's Smart Response technology (allows the SSD to act as a cache for HDDs) I would like the SSD cache to act as a cache for my HDDs, which I would like to be in a RAID1 array (so I get the speed from the SSD and the redundancy from the RAID1) However, Windows only sees the drive in device manager (not as a drive), so I'm unsure what to do about that. Related: as far as I know, for this to work, the drives all have to be in a single RAID array (i.e. a RAID0 pairing of the SSD and the RAID1 HDD array). However, when attempting this at the BIOS level, I am told there is not enough space for an array. Steps so Far Moved the SSD onto the Intel controller (I'd had it on the Marvel 6.0 controller instead of the Intel controller, so the BIOS was only seeing it in a strange way) Updated the BIOS of the motherboard to the latest version Reinstalled Intel's RST (iRST?) software several times, as some forums reported it working after reinstalling 3 times (which does not inspire confidence). Checked Intel storage: it does see the SSD as a physical, non-RAID device. However, it says no space exists if I try to create an array. Checked the BIOS: it does not show up in the boot order, but is an option that can be selected under boot options. Tried the firmware update for that model. Issue: the firmware CD doesn't detect a drive; maybe the Intel storage controller is making it difficult? moved the ssd to the marvel controller. The firmware update cd appeared to hang while searching for drives. swapped out the SATA cable for the manufacturer's and moved back to the intel storage controller. Noticed at this point that in the Intel RST software, a device DOES show up in addition to the RAID set -- only shown as a "60 GB internal disk". Windows doesn't appear to see it as a drive, but it does still show in device manager. Move SSD to port from 0-3 on MOBO and set SATA mode to IDE (after disconnecting RAID1 config) to allow the firmware update to work. Firmware was already at the latest version. Next Steps ? Components involved ASUS P8Z68-V PRO motherboard (Intel Z68 Chipset) Intel i7 2600k Processor 2 x 1TB 7200 RPM HDDs 64 GB Crucial M4 SSD (M4-CT064M4SSD2) For Reference -- Storage Configuration Intel 3 gbps Intel 3gbps Intel 6gbps Marvel 6gbps +----------+ +----------+ +----------+ +----------+ | | <----+ | | +-+ | | | |----------| | |----------| |-|--------| |----------| | | | | + | | | | | | +----------+ | +--|-------+ +-|--------+ +----------+ | | | + v v | 1 TB HDD 64 GB SSD + +> 1 TB HDD For Reference -- Intel RST (v10.8.0.1003) Screenshot Don't mind the "rebuilding" -- knocked a power cable out at one point; it's doing its job, not an indicator of a bad HDD. Any thoughts? Thanks in advance for any help!

    Read the article

  • Choice and setup of version control

    - by Peter M
    I am about to set up an new laptop and in the process transition to a new version control system as part of a general cleanup. Currently I use a centralized version control system (yes it is VSS, and yes I know all the pro's and con's of that system, but as a single user system it works well for me). I have very little requirements for a new system and I am free to choose among any of the current mainstream players, but cost constraints will push me towards oss. Some of my requirements are: Runs on a single machine (ie the laptop in question) under windows I am not sharing things with other developers or workers - this is more for my own historical benefits. I want to version source code, documentation and binary files I have a large hierarchy of projects that are unrelated (see below) I have files within the hierarchy that don't need to be controlled (but could be) Some projects use Visual Studio, so some integration there could be nice. There could be some sharing of files between jobs. I generally only need a small about of branching in code files The directory hierarchy that I have at the moment is somewhat like: Root | |--Customer #1 | | | |--Job #1 | | | | | |--Data files received from Customer for Job (not controlled) | | |--Documentation files (controlled) | | |--Project information files (not controlled - but could be) | | |--Software Project Files (controlled) | | |--Scratch dir for job (not controlled) | | | |--Job #2 | | (same structure as above) | |--Customer #2 | |.. | |--Cusmtomer #n |.. Currently I have about 22 customers with differing numbers of projects underneath them. At the moment I have a single VSS repository based at the root of the directory structure. If I kept with a centralized system (ie SVN) I believe that I should keep the same approach and continue with a single repository based from the root dir. Is this a valid approach? However if I move to a distributed tool then I am unsure of how I should handle the situation. My initial guess is that I should not have a repository based on the root of my entire directory structure - but that is a guess so I really don't know how valid it is. Should I pitch a distributed approach at the Root, Customer, Job or sub-Job directory level? Also what I am not clear on with distributed tools (and perhaps with SVN as well), is if I can branch parts of a repository. For example, I can see branching source code in software projects as being useful, but branching my documentation as not being useful. So if I pitch a repository at the Job level, can I just branch the Software Project Files? Or would all files in that Job be branched? Every time I look at distributed tools I get a nagging feeling that they are not suited to my style of setup. I am uncomfortable with idea of having to manually set up something like 50 to 80 separate repositories (if I pitch at the Job level, or 20+ if at the Customer level) within my directory hierarchy. This feeling also extends to having all those repositories scattered around as well - however I do have a backup strategy that I trust, so this latter feeling is pretty well unfounded. So what advice can you all give me? Thanks in advance!

    Read the article

  • Cannot open simple script application on Mac

    - by streetpc
    Mac OS X 10.6 I created a very simple app, which is only a wrapper of a shell script (so that I can select this script in application selectors, like startup apps). I try to launch it and yesterday it worked, but today I changed the executable script's content and name (with something that perfeclty works in a shell script launched in the Terminal) and it will only display a Finder-iconed dialog saying Cannot open the application because it is not supported on this kind of Mac. I restored the previous script (content/name) but I still get the error! Same when re-bundling the app from scratch, or completely changing the bundle identifier… If I try to open it in the Terminal using open My.app, I get The application cannot be opened because it has an incorrect executable format. But when I executes directly the Contents/MacOS/Script, it allways works (iwth both contents). Also, it is displayed with correct icon and meta-information in the Finder (so I guess the Info.plist is understood). The app's file tree is: Contents/ Info.plist MacOS/ Script (executable bit set, works when launched directly) PkgInfo Resources/ AppIcon.icns Here is the Info.plist content: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>CFBundleExecutable</key> <string>Script</string> <key>CFBundleIconFile</key> <string>AppIcon</string> <key>CFBundleIdentifier</key> <string>asdf.ScriptApp</string> <key>CFBundleInfoDictionaryVersion</key> <string>6.0</string> <key>CFBundleName</key> <string>My script</string> <key>CFBundlePackageType</key> <string>APPL</string> <key>CFBundleShortVersionString</key> <string>1.0</string> <key>CFBundleSignature</key> <string>????</string> <key>CFBundleVersion</key> <string>1</string> <key>LSMinimumSystemVersion</key> <string>10.4</string> </dict> </plist> And the PkgInfo file only contains APPL????. I tested the Script with a simple echo "ok" and echo "ok" >/tmp/test (plus #!/bin/sh header). So my questions are: Is there some kind of validity caching for applications ? based on what ? how do I flush it ? Where does this message come from ? I tried to google it but all I get is a page talking about 32/64 bits Java…

    Read the article

  • DRBD not syncing between my nodes when IP is reset

    - by ramdaz
    I am trying to setup DRBD by following the article at http://www.howtoforge.com/setting-up-network-raid1-with-drbd-on-ubuntu-11.10-p2 I am using Ubuntu 10.04 DRBD - 8.3.11 In the first run I had everything working perfectly and when shifting the systems to a production environment I decided to restart the Meta Data creation part and start from scratch. The IPs had changed entirely in the production environment. Issuing drdbadm create-md r0 in both the servers runs successfully. But when I do "drbdadm -- --overwrite-data-of-peer primary all" on the primary it fails to start the re sync. My config file is as given below resource r0 { protocol C; syncer { rate 50M; } startup { wfc-timeout 15; degr-wfc-timeout 60; } net { cram-hmac-alg sha1; shared-secret "aklsadkjlhdbskjndsf8738734jkfkjfkjf"; } on primaryds { device /dev/drbd0; disk /dev/md2; address 172.16.7.1:7788; meta-disk internal; } on secondaryds { device /dev/drbd0; disk /dev/md2; address 172.16.7.3:7788; meta-disk internal; } } Status on primary root at primaryds:~# cat /proc/drbd version: 8.3.7 (api:88/proto:86-91) GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by root at primaryds, 2012-05-12 15:08:01 0: cs:WFBitMapS ro:Primary/Secondary ds:UpToDate/Inconsistent C r---- ns:0 nr:0 dw:0 dr:200 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:5690352828 Status on secondary root at secondaryds:/etc/drbd.d# cat /proc/drbd version: 8.3.7 (api:88/proto:86-91) GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by root at secondaryds, 2012-05-12 15:25:25 0: cs:WFBitMapT ro:Secondary/Primary ds:Inconsistent/UpToDate C r---- ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:5690352828 Log of Primary May 30 13:42:23 primaryds kernel: [ 1584.057076] block drbd0: role( Secondary -> Primary ) disk( Inconsistent -> UpToDate ) May 30 13:42:23 primaryds kernel: [ 1584.086264] block drbd0: Forced to consider local data as UpToDate! May 30 13:42:23 primaryds kernel: [ 1584.086303] block drbd0: Creating new current UUID May 30 13:42:26 primaryds kernel: [ 1586.405551] block drbd0: drbd_sync_handshake: May 30 13:42:26 primaryds kernel: [ 1586.405564] block drbd0: self E8A075F378173D4B:0000000000000004:0000000000000000:0000000000000000 bits:1422588207 flags:0 May 30 13:42:26 primaryds kernel: [ 1586.405574] block drbd0: peer 0000000000000004:0000000000000000:0000000000000000:0000000000000000 bits:1422588207 flags:0 May 30 13:42:26 primaryds kernel: [ 1586.405582] block drbd0: uuid_compare()=2 by rule 30 May 30 13:42:26 primaryds kernel: [ 1586.405587] block drbd0: Becoming sync source due to disk states. May 30 13:42:26 primaryds kernel: [ 1586.405592] block drbd0: Writing the whole bitmap, full sync required after drbd_sync_handshake. May 30 13:42:27 primaryds kernel: [ 1588.171638] block drbd0: 5427 GB (1422588207 bits) marked out-of-sync by on disk bit-map. 
May 30 13:42:27 primaryds kernel: [ 1588.172769] block drbd0: conn( Connected -> WFBitMapS ) Log in Secondary May 30 13:42:24 secondaryds kernel: [ 1563.304894] block drbd0: peer( Secondary - Primary ) pdsk( Inconsistent - UpToDate ) May 30 13:42:24 secondaryds kernel: [ 1563.339674] block drbd0: drbd_sync_handshake: May 30 13:42:24 secondaryds kernel: [ 1563.339685] block drbd0: self 0000000000000004:0000000000000000:0000000000000000:0000000000000000 bits:1422588207 flags:0 May 30 13:42:24 secondaryds kernel: [ 1563.339695] block drbd0: peer E8A075F378173D4B:0000000000000004:0000000000000000:0000000000000000 bits:1422588207 flags:0 May 30 13:42:24 secondaryds kernel: [ 1563.339703] block drbd0: uuid_compare()=-2 by rule 20 May 30 13:42:24 secondaryds kernel: [ 1563.339709] block drbd0: Becoming sync target due to disk states. May 30 13:42:24 secondaryds kernel: [ 1563.339714] block drbd0: Writing the whole bitmap, full sync required after drbd_sync_handshake. May 30 13:42:26 secondaryds kernel: [ 1565.652342] block drbd0: 5427 GB (1422588207 bits) marked out-of-sync by on disk bit-map. May 30 13:42:26 secondaryds kernel: [ 1565.652965] block drbd0: conn( Connected - WFBitMapT ) The serves are not responding once it reaches this stage. Tried redoing it couple of time but noting happens. Why could the resync not be taking place? I would like some advice? Directions?

    Read the article

  • Asus P8P67 Rev. 3.1 Motherboard issues powering on and saving settings

    - by Scott
    Edit: New Information Have some updated information from the old question below: So basically my issue right now is somewhat similar, but I've been able to rule out a couple of things. I don't think this has anything to do with light on the motherboard. No matter what lights are on/off on the motherboard when the computer is off, they don't affect this issue. The main power LED on the Mobo is always lit when the power supply is turned on, and that's what matters anyway. Even when the main power LED is on, the PC will NOT boot up the first time I hit the power switch. I have to go reset the power supply (make all lights turn off on the Mobo and back on), and THEN hit the power switch. Then everything boots up. Also, the BIOS settings are reset every time this happens. Asus Tech Support told me to try jumping the power with something metal to try and rule out that it's a problem with the connectors getting power, or if it's a problem with the case power switch pins - haven't done that yet though. Any ideas? This is a lot simpler than it was before when I thought it had to do with certain LED indicators for RAM, EPU, etc. Original Question So I built my new desktop just about 3 weeks ago. I've been having a few issues which I think are all related to my motherboard, an Asus P8P67 Revision 3.1, but I'm not 100% sure as this is really the first from-scratch build I've ever done. I've posted these questions on the Asus forums, Asus Tech Support, and the Corsair forums as well as I thought it might have something to do with my power supply at one point. None of these avenues have solved my issue until now completely, so I thought I'd come here to see what you guys think. Here's what's happening: My computer is off, and I go to power it on. I press the power switch on the case (Antec Nine Hundred), and nothing seems to happen. Upon further inspection, I see that what this actually does is simply turn on the EPU LED on my motherboard, but doesn't actually boot anything up. I then have to go and flip the main power switch on the power supply off and back on. What this does is turn off all lights on the Motherboard after a few seconds, and turn them all back on (including the EPU LED that was off before I hit the power switch the first time). Now, hitting the power switch works. The machine boots up fine, and starts going through the boot up process. As a side note: My Motherboard is set to "Force BIOS", and every single time I change this to do the opposite, the next time my computer boots up that change reverts itself. I think this may be due to the fact that I am doing the hard reset on the power supply each time, but I'm not sure. I had thought that the Motherboard would keep its BIOS settings unless you did something to the Mobo itself - so this may be a related issue, or something else completely. That's basically it. Once it's on, it's on. It works fine, recognizes all of my hardware, and runs great. All fans/lights in the case work great, and I'm getting standard readings. The next time I go to shut the computer down however, I can expect the same exact process getting it up and running, including being forced to go into BIOS and exit again before I can load Windows. Another side note: If I power on my computer using the power switch DIRECTLY after shutting it down, it powers right back on (I think this is because the EPU LED light doesn't have time to turn off). 
It looks as if, as long as the EPU LED is lit up on the motherboard before I hit the power switch on the case, the thing will boot up fine (although this doesn't explain the "Force BIOS" issue, at least it's something). Any ideas? Thanks guys.
P.S. - System specs:
- Asus P8P67 Rev. 3.1 motherboard
- Intel Core i7 2600K processor
- 16GB (4x4GB) G-Skill 1600 RAM
- NVIDIA EVGA GTX 570 video card
- Crucial 128GB SSD HD
- Corsair 850W power supply
- Seagate 2TB HDD

    Read the article

  • Open-source and client-side IRC-Client

    - by user125197
    I'm administrating a homepage with an IRC web client. At the moment it uses a Java client, modified to be SSL-compatible and compatible with whatever design I want to use. The usual PJIRC thing, with users' wishes and concerns implemented (I'll give you SSL-PJIRC if you want - no problem. The login script will be improved for IE6 support this weekend, meaning it writes the object tag if an old IE is found, that's it. The rest runs perfectly well). Still, the better user experience would be a client that only requires the user to enable JavaScript. So, before I go into rewriting and customizing a complete Java IRC solution, I'd like to ask some other people - after researching for half a year. The requirements are:
    a) Free hosting; no cgi-bin is acceptable. A lot of hosters support CGI, but don't allow IRC access, so you seem to be limited to x10-hosting for CGI:IRC.
    b) The solution must be open source (not free as in free beer, but free as in free speech - I want to be able to read every line of the code).
    c) Hosting cannot be limited to Windows hosting. Free operating systems (anything with Linux, BSD, whatever - I think you get what I mean) must be possible.
    d) No server-side technology. I'm limited to everything that runs client-side only. (Well, a Java applet does...)
    e) The solution must support SSL.
    f) The site must be able to move. Meaning: you register with a new free hoster, upload your things via FileZilla, and it simply runs. Nothing server-specific needed.
    g) Old browsers must be supported.
    From all my research, there are two ways: a) a Java applet, or b) HTML5 WebSockets (it is impossible to use the JavaScript library I found, as it depends on ActiveX - which means Windows hosting). Option b) means "you are going to enter very unstable content" - I worked with HTML5 for months, and, well... Besides, HTML5 is not suitable for older browsers (old-browser support is an absolutely necessary requirement). PHP, by the way, is a server-side solution... so no line of PHP. My idea now was a Java applet, plus a fallback which again is embedded and proprietary but only requires JavaScript (wsirc or Mibbit are great here, whilst I prefer wsirc... though its fonts zoom to sizes that are plainly too small, and worse, I cannot read the code). So my question is - with free hosting, no installation on the server, plain upload - do you see an open-source, completely client-side, SSL-compatible way to use or write a client? This wouldn't be my first programming project, and I'm not afraid of the code. If there was a way and I had to write it, even from scratch, I'd do it. From all I know, there sadly is none. But maybe some experienced admin has an idea? Best wishes, yetanotheruser. Sorry, my English isn't perfect - it's enough for programming at least.

    Read the article

  • RAID controller dropping the wrong drive

    - by bramp
    I've been having an issue with a 3ware 9500S-8 RAID 10, and I have contacted their tech support, but I wanted to hear the serverfault community's recommendations. Firstly, all my data is backed up and secure, so I don't mind blowing my RAID away if I have to. But let me describe the problem I've been seeing. A month ago, disk 6 dropped out of the RAID. It is mirrored with disk 7, so I wasn't that bothered. I went to the data centre and replaced it. When I got back to the office, I noticed that disk 6 was still not in the RAID, and in fact the controller was still showing the name of the old drive. A week later I went back and replaced the drive again, thinking I might have swapped in a bad drive. Still the same problem. I decided to reboot the machine, to see if that would "force" the controller into seeing the new drive. It did, and a rebuild started to happen (from disk 7). Eventually both drives were showing as good.
    A week later, MySQL flagged the database as corrupt and was unable to repair it. I don't know what went wrong, but I suspected this 6-7 pair. At this point I noticed that the RAID had been constantly verifying itself, over and over. Regardless of this I began to rebuild the database, which took about 19 hours. It's a big database. Near the end of the repair, the RAID controller told me it had dropped disk 7, and that some data was most likely corrupted. I contacted LSI tech support, and they very promptly started to help me. I mentioned that drive 7 had been dropped. They suspect that drive 7 was always at fault, and drive 6 had always been good. I want to know how often a RAID controller drops the wrong drive (in this case dropping drive 6 a month ago, instead of 7). I foolishly didn't run smartctl on the drives before I started swapping them out; I just assumed the RAID controller knew what it was talking about.
    My plan of action is to replace drive 7, rebuild the array from scratch, double-check smartctl on ALL the disks, and then start restoring my data again. I would appreciate anyone's input on what the correct procedure for swapping drives is, and how often failures like this happen. If anyone would like more information, I'd be happy to provide it. Thanks in advance. Some more information: I'm running CentOS 5.3, with two RAID arrays - a simple RAID 1 for the OS, and RAID 10 for the database. Both arrays are on different controllers. The RAID 10 is made of 10 identical ST3640323AS drives, until I swapped in a SAMSUNG HD103SJ last month.
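    For what it's worth, a hedged sketch of the checks I'd run before the rebuild - the controller number c0, unit u0, the port numbers 6/7 and the /dev/twa0 device node are assumptions about a typical 9500S install (9000-series cards usually appear as /dev/twaN to smartctl), not values read from this machine:

        # What the 3ware firmware thinks of the controller, the unit and each port (needs 3ware's tw_cli)
        tw_cli /c0 show
        tw_cli /c0/u0 show

        # SMART data for the physical disks behind the controller; -d 3ware,N selects the port
        smartctl -a -d 3ware,6 /dev/twa0
        smartctl -a -d 3ware,7 /dev/twa0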

    Read the article

  • Cannot open simplest mac application

    - by streetpc
    I created a very simple app, which is only a wrapper around a shell script (so that I can select this script in application selectors, like startup apps). Yesterday it launched fine, but today I changed the executable script's content and name (to something that works perfectly when run as a shell script in the Terminal) and now it will only display a Finder-iconed dialog saying "Cannot open the application because it is not supported on this kind of Mac." I restored the previous script (content/name) but I still get the error! Same when re-bundling the app from scratch, or completely changing the bundle identifier… If I try to open it in the Terminal using open My.app, I get "The application cannot be opened because it has an incorrect executable format." But when I execute Contents/MacOS/Script directly, it always works (with both contents). Also, it is displayed with the correct icon and meta-information in the Finder (so I guess the Info.plist is understood). The app's file tree is:
        Contents/
            Info.plist
            MacOS/
                Script    (executable bit set, works when launched directly)
            PkgInfo
            Resources/
                AppIcon.icns
    Here is the Info.plist content:
        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>CFBundleExecutable</key>
            <string>Script</string>
            <key>CFBundleIconFile</key>
            <string>AppIcon</string>
            <key>CFBundleIdentifier</key>
            <string>asdf.ScriptApp</string>
            <key>CFBundleInfoDictionaryVersion</key>
            <string>6.0</string>
            <key>CFBundleName</key>
            <string>My script</string>
            <key>CFBundlePackageType</key>
            <string>APPL</string>
            <key>CFBundleShortVersionString</key>
            <string>1.0</string>
            <key>CFBundleSignature</key>
            <string>????</string>
            <key>CFBundleVersion</key>
            <string>1</string>
            <key>LSMinimumSystemVersion</key>
            <string>10.4</string>
        </dict>
        </plist>
    And the PkgInfo file only contains APPL????. I tested the Script with a simple echo "ok" and with echo "ok" >/tmp/test (plus a #!/bin/sh header). So my questions are:
    * Is there some kind of validity caching for applications? Based on what? How do I flush it?
    * Where does this message come from? I tried to google it, but all I get is a page talking about 32/64-bit Java…
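    Since the question is about a validity cache: LaunchServices does keep a per-bundle database, and the sketch below is the usual way to poke it. It assumes the bundle is called My.app in the current directory, and that lsregister lives at the path shown (the exact path moves between OS X versions, so treat it as an assumption):

        # Does the kernel/LaunchServices see a script with a proper shebang?
        file My.app/Contents/MacOS/Script
        head -1 My.app/Contents/MacOS/Script     # should print the #!/bin/sh line

        # Re-register just this bundle with LaunchServices
        LSREG=/System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister
        "$LSREG" -f My.app

        # Heavier hammer: rebuild the whole LaunchServices database, then log out and back in
        "$LSREG" -kill -r -domain local -domain system -domain user

        # Bump the bundle's modification date so the Finder re-reads Info.plist
        touch My.app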

    Read the article

  • Capistrano + Nginx + Passenger = 403

    - by slimchrisp
    I asked this over at stackoverflow as well, but still haven't received any answers that have helped me solve this problem. I have spent almost a week at this point trying to solve the issue, and I'm just not making any headway. It seems that this issue is pretty common, but none of the solutions I found online work for me. A buddy of mine is actually creating the same setup, and he is having the same issue. After a few days stuck with the 403 error I started over using this tutorial: http://blog.ninjahideout.com/posts/a-guide-to-a-nginx-passenger-and-rvm-server I had hoped starting from scratch using this tutorial would work, but no dice. Either way, if you view the tutorial you can see what steps I have taken. Here is essentially what I have going on:
    - I have a VPS account on linode.com; the server OS is Ubuntu 10.04.
    - The local OS used to deploy with Capistrano is Snow Leopard 10.6.6 (shouldn't matter, but just so you know).
    - I use RVM on the server, version 1.2.2. I was previously on ruby-1.9.2-p0 [ i386 ], but per the tutorial listed above I switched to ree-1.8.7-2010.02 [ i386 ]. Running 'which ruby' from the command line verifies that I am using 1.8.7, with the following output: /usr/local/rvm/rubies/ree-1.8.7-2010.02/bin/ruby
    - passenger -v prints the following: Phusion Passenger version 3.0.2
    - Running 'nginx -v' gives me a message that the command nginx could not be found. The server is definitely there and running, as I can use nginx to serve static files, but this could have something to do with my problem.
    - I have two users dealing with the install: root, which I used to install everything, and deployer, a user I created specifically for deploying my applications.
    - My web app directory is in the deployer user's home directory: /home/deployer/webapps/mysite.com/public. Per the Capistrano default deploy, a symbolic link called current is created in the public folder and points to /home/deployer/webapps/mysite.com/public/releases/most_current_release
    - I have chmodded the deployer directory recursively to 777.
    - /opt/nginx permissions: rwxr-xr-x
    - /usr/local/rvm/gems/ree-1.8.7-2010.02/gems/passenger-3.0.2 permissions: rwxrwsrwx
    My nginx config file has gone through just short of eternity variations, but currently looks like this:
        worker_processes 1;
        events {
            worker_connections 1024;
        }
        http {
            passenger_root /usr/local/rvm/gems/ree-1.8.7-2010.02/gems/passenger-3.0.2;
            passenger_ruby /usr/local/rvm/bin/passenger_ruby;
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;
            server {
                # listen *:80;
                server_name mysite.com www.mysite.com;
                root /home/deployer/webapps/mysite.com/public/current;
                passenger_enabled on;
                passenger_friendly_error_pages on;
                access_log logs/mysite.com/server.log;
                error_log logs/mysite.com/error.log info;
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root html;
                }
            }
        }
    I bounce nginx, hit the site, and boom: 403, and the logs say "directory index of /home/deployer... is forbidden". As others with a similar problem have said, you can drop an index.html into public/releases/current_release and it will render. But rails no worky. That's basically it. At this point I have just about completely exhausted every possible solution attempt I can think of.
I am a programmer and definitely not a sysadmin, so I am 99% sure this has something to do with permissions that I have hosed, but for the life of me I just can't figure out where. If anyone can help I would really really appreciate it. If there's any specific permission things you want me to check (ie groups/permissions), can you please include the commands to do so as well. Hopefully this will help others in the future who read this post. Let me know if there is any other information I can provide, and thanks in advance!!!
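A hedged checklist sketch for chasing this kind of 403 - the paths reuse the ones from the question, but the nginx worker user and the /opt/nginx/sbin/nginx binary location are assumptions (they depend on how Passenger's installer built nginx):

    # Which user do the nginx worker processes (and therefore Passenger) run as?
    ps aux | grep '[n]ginx'

    # Every directory from / down to the Rails root must be traversable (x bit) by that user
    namei -m /home/deployer/webapps/mysite.com/public/current/

    # Does the current symlink resolve, and does the release it points to contain config.ru?
    ls -l /home/deployer/webapps/mysite.com/public/current
    ls /home/deployer/webapps/mysite.com/public/current/config.ru

    # The nginx built by the Passenger installer is usually here rather than on the PATH
    /opt/nginx/sbin/nginx -V

    # Confirm Passenger is loaded and actually sees the application
    passenger-status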

    Read the article

  • IPv6 host route is deleted after PMTU expires

    - by SAPikachu
    I am experimenting my new IPv6 tunnel setup between my local Ubuntu box and a scratch Linode. I set up some docker containers, configured 6in4 tunnel server and IPv6 forwarding on the Linode: # uname -a Linux argo 3.15.4-x86_64-linode45 #1 SMP Mon Jul 7 08:42:36 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux # ip addr .. snipped .. 48: sit-sapikachu: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1472 qdisc noqueue state UNKNOWN group default link/sit 106.185.41.115 peer 1.2.3.4 inet6 fd00::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::6ab9:2973/64 scope link valid_lft forever preferred_lft forever 13: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff inet 172.17.42.1/16 scope global docker0 valid_lft forever preferred_lft forever inet6 fc00::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::5484:7aff:fefe:9799/64 scope link valid_lft forever preferred_lft forever // Docker containers are bridged to docker0 On my local box, I configured a 6in4 tunnel interface to connect to the Linode box, and added a host route to one of the docker container: # uname -a Linux sapikachu-netbox 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux # ip addr .. snipped .. 16: sit-argo: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default link/sit 0.0.0.0 peer 106.185.41.115 inet6 fd00::2/64 scope global valid_lft forever preferred_lft forever inet6 fe80::a97:302/64 scope link valid_lft forever preferred_lft forever inet6 fe80::ac19:1/64 scope link valid_lft forever preferred_lft forever inet6 fe80::c0a8:1f0/64 scope link valid_lft forever preferred_lft forever inet6 fe80::c0a8:1fa/64 scope link valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether *** brd ff:ff:ff:ff:ff:ff .. snipped .. inet6 fd00:0:1::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::2e0:6fff:fe0e:365e/64 scope link valid_lft forever preferred_lft forever # ip route replace fc00::1875:8606:d8c1:8a9d via fd00::1 # Add route to docker container # ip -6 route .. snipped unrelated routes fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo metric 1024 expires 590sec mtu 1472 fd00::/64 dev sit-argo proto kernel metric 256 fd00:0:1::/64 dev eth0 proto kernel metric 256 fe80::/64 dev sit-argo proto kernel metric 256 (Note that tunnel MTU on my local box is different from the server, this is intentional for testing) After adding the host route to the docker container (fc00::1875:8606:d8c1:8a9d), I can ping the container without problem until the route expires. After that I couldn't get reply any more. If I run ip -6 route in a few seconds after expiration, expiration time of the host route will be a negative number: fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo metric 1024 expires -1sec And output of ip route get fc00::1875:8606:d8c1:8a9d shows that it is routed to my default IPv6 gateway (which fails to route it correctly of course, since the address is not globally routable). After some time, the host route disappears without a trace. This problem won't happen if I do either one of the following things: Set MTU of tunnel on my local box to be the same as the server (1472). The route won't have expiration time in both ip -6 route and ip route get in this case. Instead of adding a host route, add a route with network mask (even /127 works). 
In this case ip -6 route shows the route without an expiration time; ip route get shows an expiration time, but it is correctly refreshed after expiration. Although this problem can be easily worked around, I am curious to know why it happens. Is there an error in my configuration, or is this a kernel bug?
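Not an answer to the kernel question, but a hedged workaround sketch worth trying: pin the route's MTU so no separate PMTU-derived entry (with its expiry) has to be cached. The addresses reuse the ones above; `mtu lock` is standard iproute2 syntax, though whether it sidesteps this particular expiry behaviour is an assumption:

    # Re-add the host route with the MTU locked to the tunnel's value (1472 here)
    ip -6 route replace fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo mtu lock 1472

    # Inspect what the kernel actually uses for this destination
    ip -6 route get fc00::1875:8606:d8c1:8a9d
    ip -6 route show cache

    # How long cached PMTU information is kept, in seconds
    sysctl net.ipv6.route.mtu_expires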

    Read the article

< Previous Page | 59 60 61 62 63 64 65 66 67 68 69  | Next Page >