Search Results

Search found 9254 results on 371 pages for 'approach'.


  • Need help with 2 MySql Queries. Join vs Subqueries.

    - by BugBusterX
    I have two tables — user: id, name; and message: sender_id, receiver_id, message, read_at, created_at. There are two results I need to retrieve, and I'm trying to find the best solution. I have included the queries I'm using at the very end.

    1. I need to retrieve a list of users, and with each user have information available on whether there are any unread messages from that user (them as sender, me as receiver) and whether there are any read messages between us (they send and I'm the receiver, or I send and they are the receiver).
    2. Same as above, but only those members where there has been any messaging between us, sorted by unread first, then by last message received.

    Can you advise, please? Should this be done with joins or with subqueries? In the first case I do not need the count; I just need to know whether or not there is at least one unread message. I'm posting my schema and current queries, please have a look when you get a chance. Everything is the way I want it in the first query. My concern is: in the second query I would like to order by message.created_at, but I don't think I can do that with grouping? I also don't know whether this approach is the most optimized and fast.

    ```sql
    CREATE TABLE `user` (
      `id` bigint(20) NOT NULL AUTO_INCREMENT,
      `name` varchar(255) NOT NULL,
      PRIMARY KEY (`id`)
    );
    INSERT INTO `user` VALUES (1,'User 1'),(2,'User 2'),(3,'User 3'),(4,'User 4'),(5,'User 5');

    CREATE TABLE `message` (
      `id` bigint(20) NOT NULL AUTO_INCREMENT,
      `sender_id` bigint(20) DEFAULT NULL,
      `receiver_id` bigint(20) DEFAULT NULL,
      `message` text,
      `read_at` datetime DEFAULT NULL,
      `created_at` datetime NOT NULL,
      PRIMARY KEY (`id`)
    );
    INSERT INTO `message` VALUES
      (1,3,1,'Messge',NULL,'2010-10-10 10:10:10'),
      (2,1,4,'Hey','2010-10-10 10:10:12','2010-10-10 10:10:11'),
      (3,4,1,'Hello','2010-10-10 10:10:19','2010-10-10 10:10:15'),
      (4,1,4,'Again','2010-10-10 10:10:25','2010-10-10 10:10:21'),
      (5,3,1,'Hiii',NULL,'2010-10-10 10:10:21');

    -- Query 1: all users
    SELECT u.*, m_new.id AS have_new, m.id AS have_any
    FROM user u
    LEFT JOIN message m_new
      ON (u.id = m_new.sender_id AND m_new.receiver_id = 1 AND m_new.read_at IS NULL)
    LEFT JOIN message m
      ON ((u.id = m.sender_id AND m.receiver_id = 1) OR (u.id = m.receiver_id AND m.sender_id = 1))
    GROUP BY u.id;

    -- Query 2: only users with any messaging between us
    SELECT u.*, m_new.id AS have_new, m.id AS have_any
    FROM user u
    LEFT JOIN message m_new
      ON (u.id = m_new.sender_id AND m_new.receiver_id = 1 AND m_new.read_at IS NULL)
    LEFT JOIN message m
      ON ((u.id = m.sender_id AND m.receiver_id = 1) OR (u.id = m.receiver_id AND m.sender_id = 1))
    WHERE m.id IS NOT NULL
    GROUP BY u.id;
    ```
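    One way to address the ordering concern, offered only as a hedged sketch since it reshapes the asker's query: let aggregate functions compute both flags and the sort key, so GROUP BY and ORDER BY stop conflicting. This relies on MySQL's 0/1 boolean arithmetic:

    ```sql
    -- Sketch, not the asker's query: flags and sort key via aggregates.
    SELECT u.*,
           MAX(m.sender_id = u.id AND m.receiver_id = 1
               AND m.read_at IS NULL)    AS have_new,
           MAX(m.created_at)             AS last_message_at
    FROM user u
    JOIN message m
      ON (m.sender_id = u.id AND m.receiver_id = 1)
      OR (m.receiver_id = u.id AND m.sender_id = 1)
    GROUP BY u.id
    ORDER BY have_new DESC, last_message_at DESC;
    ```

    The inner JOIN keeps only users with some messaging (query 2's filter), and MAX(m.created_at) provides a per-user sort key even under grouping.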


  • Guilty of unsound programming

    - by TelJanini
    I was reading Robert Rossney's entry on "What's the most unsound program you've had to maintain?" when I realized that I had inadvertently developed a near-identical application! The app consists of an HTTPListener object that grabs incoming POST requests. Based on the information in the header, I pass the body of the request to SQL Server to perform the appropriate transaction. The requests look like:

    ```xml
    <InvoiceCreate Control="389">
      <Invoice>
        <CustomerNumber>5555</CustomerNumber>
        <Total>300.00</Total>
        <RushOrder>1</RushOrder>
      </Invoice>
    </InvoiceCreate>
    ```

    Once it's received by the HTTPListener object, I perform the required INSERT into the Invoice table using SQL Server's built-in XML handling functionality via a stored procedure:

    ```sql
    INSERT INTO Invoice (InvoiceNumber, CustomerNumber, Total, RushOrder)
    SELECT @NEW_INVOICE_NUMBER,
           @XML.value('(InvoiceCreate/Invoice/CustomerNumber)[1]', 'varchar(10)'),
           @XML.value('(InvoiceCreate/Invoice/Total)[1]', 'varchar(10)'),
           @XML.value('(InvoiceCreate/Invoice/RushOrder)[1]', 'varchar(10)')
    ```

    I then use another SELECT statement in the same stored procedure to return the value of the new invoice number that was inserted into the Invoice table:

    ```sql
    SELECT @NEW_INVOICE_NUMBER FOR XML PATH('InvoiceCreateAck')
    ```

    I then read the generated XML using a SQL data reader object in C# and use it as the response of the HTTPListener object. My issue is, I'm noticing that Robert is indeed correct. All of my application logic exists inside the stored procedure, so I find myself having to do a lot of error checking (i.e. validating the CustomerNumber and InvoiceNumber values) inside the stored procedure. I'm still a mid-level developer and, as such, am looking to improve. Given the original post and my current architecture, what could I have done differently to improve the application? Are there any patterns or best practices that I could refer to? What approach would you have taken? I'm open to any and all criticism, as I'd like to do my part to reduce the amount of "unsound programming" in the world.
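    One commonly suggested direction, offered here only as a hedged sketch rather than the fix: deserialize the request into a typed object in C# and validate it there, so the stored procedure only persists already-checked data. All type and method names below are made up for illustration:

    ```csharp
    using System;
    using System.IO;
    using System.Xml.Serialization;

    // Hypothetical DTOs mirroring the <InvoiceCreate> payload.
    [XmlRoot("InvoiceCreate")]
    public class InvoiceCreate
    {
        [XmlAttribute("Control")] public int Control { get; set; }
        [XmlElement("Invoice")]   public InvoiceBody Invoice { get; set; }
    }

    public class InvoiceBody
    {
        public string CustomerNumber { get; set; }
        public decimal Total { get; set; }
        public bool RushOrder { get; set; }
    }

    public static class InvoiceParser
    {
        // Validate in C# before the stored procedure ever runs.
        public static InvoiceCreate Parse(Stream requestBody)
        {
            var serializer = new XmlSerializer(typeof(InvoiceCreate));
            var request = (InvoiceCreate)serializer.Deserialize(requestBody);
            if (request.Invoice == null ||
                string.IsNullOrEmpty(request.Invoice.CustomerNumber))
                throw new InvalidOperationException("CustomerNumber is required");
            return request;
        }
    }
    ```

    With the checks done up front, the procedure shrinks to a plain parameterized INSERT, which also removes the XML parsing from the database tier.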


  • recursion resulting in extra unwanted data

    - by spacerace
    I'm writing a module to handle dice rolling. Given x dice of y sides each, I'm trying to come up with a list of all potential roll combinations. This code assumes 3 dice, each with 3 sides labeled 1, 2, and 3. (I realize I'm using "magic numbers", but this is just an attempt to simplify and get the base code working.)

    ```java
    int[] set = { 1, 1, 1 };
    list = diceroll.recurse(0, 0, list, set);
    ...
    public ArrayList<Integer> recurse(int index, int i, ArrayList<Integer> list, int[] set) {
        if (index < 3) {
            // System.out.print("\n(looping on "+index+")\n");
            for (int k = 1; k <= 3; k++) {
                // System.out.print("setting i"+index+" to "+k+" ");
                set[index] = k;
                dump(set);
                recurse(index + 1, i, list, set);
            }
        }
        return list;
    }
    ```

    (dump() is a simple method that just displays the contents of set[]. The variable i is not used at the moment.) What I'm attempting to do is increment set[index] by one, stepping through the entire length of the array and incrementing as I go. This is my "best attempt" code. Here is the output; the complete three-die combinations are what I'm looking for, and I can't figure out how to get rid of the rest. (This assumes three dice, each with 3 sides. I'm using recursion so I can scale it up to any x dice with y sides.)

    ```
    [1][1][1] [1][1][1] [1][1][1] [1][1][2] [1][1][3] [1][2][3] [1][2][1] [1][2][2]
    [1][2][3] [1][3][3] [1][3][1] [1][3][2] [1][3][3] [2][3][3] [2][1][3] [2][1][1]
    [2][1][2] [2][1][3] [2][2][3] [2][2][1] [2][2][2] [2][2][3] [2][3][3] [2][3][1]
    [2][3][2] [2][3][3] [3][3][3] [3][1][3] [3][1][1] [3][1][2] [3][1][3] [3][2][3]
    [3][2][1] [3][2][2] [3][2][3] [3][3][3] [3][3][1] [3][3][2] [3][3][3]
    ```

    I apologize for the formatting; this was the best I could come up with. Any help would be greatly appreciated. (This method originally stemmed from using the data for something quite trivial, but it has turned into a personal challenge.)

    Edit: If there is another approach to solving this problem I'd be all ears, but I'd also like to solve my current problem and successfully use recursion for something useful.
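    A hedged sketch of one way to get exactly the 27 complete combinations: move the dump() call into a base case, so a combination is emitted only once every die has been assigned. This reworks the asker's method under that one change:

    ```java
    // Sketch: print each complete assignment exactly once.
    public void recurse(int index, int[] set) {
        if (index == set.length) {
            dump(set);              // one full combination per call
            return;
        }
        for (int k = 1; k <= 3; k++) {
            set[index] = k;
            recurse(index + 1, set);
        }
    }
    ```

    The extra lines in the original output come from calling dump() on every partial assignment rather than only on complete ones.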


  • Is it OK to put a standard, pure C header #include directive inside a namespace?

    - by mic_e
    I've got a project with a class log in the global namespace (::log). So, naturally, after #include <cmath>, the compiler gives an error message each time I try to instantiate an object of my log class, because <cmath> pollutes the global namespace with lots of three-letter functions, one of them being the logarithm function log(). So there are three possible solutions, each having its own ugly side effects.

    1. Move the log class to its own namespace and always access it with its fully qualified name. I really want to avoid this because the logger should be as convenient as possible to use.
    2. Write a mathwrapper.cpp file which is the only file in the project that includes <cmath>, and which makes all the required <cmath> functions available through wrappers in a namespace math. I don't want to use this approach because I would have to write a wrapper for every single required math function, and it would add an additional call penalty (partially cancelled out by the -flto compiler flag).
    3. The solution I'm currently considering: replace #include <cmath> with

    ```cpp
    namespace math {
        #include "math.h"
    }
    ```

    and then call the logarithm function via math::log(). I have tried it out and it does, indeed, compile, link and run as expected. It does, however, have multiple downsides: it's (obviously) impossible to use <cmath>, because the <cmath> code accesses the functions by their fully qualified names, and using <math.h> is deprecated in C++. Also, I've got a really, really bad feeling about it, like I'm going to get attacked and eaten alive by raptors.

    So my questions are: Is there any recommendation/convention/etc. that forbids putting include directives inside namespaces? Could anything go wrong with different C standard library implementations (I use glibc), different compilers (I use g++ 4.7, -std=c++11), or linking? Has anyone tried doing this? Are there alternative ways to banish the math functions from the global namespace? I've found several similar questions on Stack Overflow, but most were about including other C++ headers, which obviously is a bad idea, and those that weren't made contradictory statements about linking behaviour for C libraries. Also, would it be beneficial to additionally put the #include <math.h> inside extern "C" {}?
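    For what it's worth, a hedged note on why option 3 links at all: glibc's math.h wraps its declarations in extern "C" when compiled as C++, and C-linkage names ignore namespaces, so math::log still resolves to the C library's log symbol. A minimal illustration, under that glibc/g++ assumption:

    ```cpp
    namespace math {
        extern "C" double log(double);  // roughly what the wrapped include boils down to
    }

    int main() {
        double d = math::log(2.0);      // links against the C library's log()
        (void)d;
        return 0;
    }
    ```

    The C++ name is qualified, but the linker symbol is still plain log, which is also why this trick breaks down for headers that declare C++-linkage entities.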


  • A question about making a C# class persistant during a file load

    - by Adam
    Apologies for the undescriptive title; it's the best I could think of for the moment. Basically, I've written a singleton class that loads files into a database. These files are typically large and take hours to process. What I am looking for is a way to have this class running and be able to call methods from within it, even if its calling class is shut down. The singleton class is simple. It starts a thread that loads the file into the database, while having methods to report on the current status. In a nutshell it's a little like this:

    ```csharp
    public sealed class BulkFileLoader
    {
        static BulkFileLoader instance = null;
        int currentCount = 0;

        BulkFileLoader() { }

        public static BulkFileLoader Instance
        {
            // Instantiate the instance class if necessary, and return it
        }

        public void Go()
        {
            // kick off 'ProcessFile' thread
        }

        public int GetCurrentCount()
        {
            return currentCount;
        }

        private void ProcessFile()
        {
            while (/* more rows in the import file */)
            {
                // insert the row into the database
                currentCount++;
            }
        }
    }
    ```

    The idea is that you can get an instance of BulkFileLoader to execute, which will process a file to load, while at any time you can get real-time updates on the number of rows it has done so far using the GetCurrentCount() method. This works fine, except that the calling class needs to stay open the whole time for the processing to continue. As soon as I stop the calling class, the BulkFileLoader instance is removed and it stops processing the file. What I am after is a solution where it will continue to run independently, regardless of what happens to the calling class.

    I then tried another approach. I created a simple console application that kicks off the BulkFileLoader, and then wrapped it around as a process. This fixes one problem, since now when I kick off the process, the file will continue to load even if I close the class that called the process. However, now the problem is that I cannot get updates on the current count: if I try to get the instance of BulkFileLoader (which, as mentioned before, is a singleton), it creates a new instance rather than returning the instance that is currently in the executing process. It would appear that singletons don't extend into the scope of other processes running on the machine. In the end, I want to be able to kick off the BulkFileLoader and at any time find out how many rows it has processed, even if I close the application I used to start it. Can anyone see a solution to my problem?
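    Since the count has to cross a process boundary, some form of IPC seems unavoidable; a named pipe, a status table in the database, or remoting would all work. As one hedged illustration (map name and helper names are made up), a named memory-mapped file can publish the counter so any process can read it while the loader runs:

    ```csharp
    using System.IO.MemoryMappedFiles;

    static class ProgressChannel
    {
        const string MapName = "BulkFileLoaderCount";   // assumed, shared by both sides

        // Loader process: create once with
        //   var mmf = MemoryMappedFile.CreateOrOpen(MapName, sizeof(int));
        // then call after each row is inserted.
        public static void Publish(MemoryMappedFile mmf, int count)
        {
            using (var accessor = mmf.CreateViewAccessor())
                accessor.Write(0, count);
        }

        // Monitoring process: read the live count at any time.
        public static int Read()
        {
            using (var mmf = MemoryMappedFile.OpenExisting(MapName))
            using (var accessor = mmf.CreateViewAccessor())
                return accessor.ReadInt32(0);
        }
    }
    ```

    The non-persisted map lives as long as the loader process holds it open, which matches the lifetime needed here.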


  • sqlite 3 opening issue

    - by anonymous
    I'm getting my data, with several similar methods, from an sqlite3 file, as in the following code:

    ```objc
    - (NSMutableArray *)getCountersByID:(NSString *)championID {
        NSMutableArray *arrayOfCounters = [[NSMutableArray alloc] init];
        @try {
            NSFileManager *fileManager = [NSFileManager defaultManager];
            NSString *databasePath = [[[NSBundle mainBundle] resourcePath]
                                      stringByAppendingPathComponent:@"DatabaseCounters.sqlite"];
            BOOL success = [fileManager fileExistsAtPath:databasePath];
            if (!success) {
                NSLog(@"cannot connect to Database! at filepath %@", databasePath);
            } else {
                NSLog(@"SUCCESS getCountersByID!!");
            }
            if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) {
                NSString *tempString = [NSString stringWithFormat:
                    @"SELECT COUNTER_ID FROM COUNTERS WHERE CHAMPION_ID = %@", championID];
                const char *sql = [tempString cStringUsingEncoding:NSASCIIStringEncoding];
                sqlite3_stmt *sqlStatement;
                int ret = sqlite3_prepare(database, sql, -1, &sqlStatement, NULL);
                if (ret != SQLITE_OK) {
                    NSLog(@"Error calling sqlite3_prepare: %d", ret);
                }
                if (sqlite3_prepare_v2(database, sql, -1, &sqlStatement, NULL) == SQLITE_OK) {
                    while (sqlite3_step(sqlStatement) == SQLITE_ROW) {
                        counterList *CounterList = [[counterList alloc] init];
                        CounterList.counterID = [NSString stringWithUTF8String:
                            (char *)sqlite3_column_text(sqlStatement, 0)];
                        [arrayOfCounters addObject:CounterList];
                    }
                } else {
                    NSLog(@"problem with database prepare");
                }
                sqlite3_finalize(sqlStatement);
            } else {
                NSLog(@"problem with database openning %s", sqlite3_errmsg(database));
            }
        }
        @catch (NSException *exception) {
            NSLog(@"An exception occured: %@", [exception reason]);
        }
        @finally {
            sqlite3_close(database);
            return arrayOfCounters;
        }
    }
    ```

    Then I access the data with this and other similar lines of code:

    ```objc
    myCounterList *MyCounterList = [[myCounterList alloc] init];
    countersTempArray = [MyCounterList getCountersByID:@"2"];
    [countersArray addObject:[NSString stringWithFormat:@"%@",
        ((counterList *)[countersTempArray objectAtIndex:i]).counterID]];
    ```

    I'm getting a lot of data, such as image names, and showing combinations of them depending on user input, with code like this:

    ```objc
    UIImage *tempImage = [UIImage imageNamed:
        [NSString stringWithFormat:@"%@_0.jpg", [countersArray objectAtIndex:0]]];
    [championSelection setBackgroundImage:tempImage forState:UIControlStateNormal];
    ```

    My problem: when I run my app for some time and fetch a lot of data, it throws this error: "problem with database openning unable to open database file - error = 24 (Too many open files)". My guess is that I'm opening my database every time getCountersByID is called but not closing it. My question: am I using the right approach to open and close the database? Similar questions that have not helped me solve this problem: "unable to open database" and "Sqlite Opening Error: Unable to open database".

    UPDATE: I assumed the error was showing up because I use these lines of code too often, ending up with error 24:

    ```objc
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSString *databasePath = [[[NSBundle mainBundle] resourcePath]
                              stringByAppendingPathComponent:@"DatabaseCounters.sqlite"];
    BOOL success = [fileManager fileExistsAtPath:databasePath];
    ```

    So I made them global, but sqlite3_errmsg shows the same error 24, though the app runs much faster now. I'll try to debug my app and see what happens.
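    Two things stand out, stated here as hedged observations: the statement prepared with sqlite3_prepare is overwritten by the second sqlite3_prepare_v2 call and never finalized, leaking one statement per query, and the database is opened on every call. A sketch of the open-once pattern (single-threaded assumption, names illustrative):

    ```objc
    // Hedged sketch: open the database once and reuse the handle.
    static sqlite3 *sharedDatabase = NULL;

    sqlite3 *SharedDatabase(void) {
        if (sharedDatabase == NULL) {
            NSString *path = [[[NSBundle mainBundle] resourcePath]
                              stringByAppendingPathComponent:@"DatabaseCounters.sqlite"];
            if (sqlite3_open([path UTF8String], &sharedDatabase) != SQLITE_OK) {
                NSLog(@"open failed: %s", sqlite3_errmsg(sharedDatabase));
                sqlite3_close(sharedDatabase);  // close even on failure, per SQLite docs
                sharedDatabase = NULL;
            }
        }
        return sharedDatabase;
    }
    ```

    With a shared handle, each query method only prepares, steps, and finalizes its own statement, so no file descriptors accumulate.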


  • A Security (encryption) Dilemma

    - by TravisPUK
    I have an internal WPF client application that accesses a database. The application is a central resource for a support team and as such includes remote access/login information for clients. At the moment this database is not available via a web interface, etc., but one day it is likely to be. The remote access information includes the usernames and passwords for the clients' networks, so that our clients' software applications can be remotely supported by us. I need to store the usernames and passwords in the database and provide the support consultants access to them so that they can log in to the client's system and then provide support. Hope this is making sense.

    So the dilemma is that I don't want to store the usernames and passwords in cleartext in the database, to ensure that if the DB was ever compromised, I am not handing access to our clients' networks to whomever gets the database. I have looked at two-way encryption of the passwords, but as they say, two-way is not much different from cleartext: if you can decrypt it, so can an attacker... eventually. The problem here is that I have set up a method using a salt and a passcode that are stored in the application, and I have also tried a salt stored in the DB, but all of these have their weaknesses; e.g. if the app were reflected, it would expose the salts.

    How can I secure the usernames and passwords in my database, and yet still let my support consultants view the information in the application so they can use it to log in? This is obviously different from storing users' passwords, which are hashed one-way because I don't need to know what they are. But I do need to know what the clients' remote access passwords are, as we need to enter them at the time of remoting in. Anybody have some theories on what would be the best approach here?

    Update: The function I am trying to build is for our CRM application that will store the remote access details for the client. The CRM system provides call/issue tracking functionality, and during the course of investigating an issue, the support consultant will need to remote in. They will then view the client's remote access details and make the connection.


  • Friendly way to parse XDocument

    - by Oli
    I have a class from which various different XML schemes are created. I create the various dynamic XDocuments via one (very long) statement, using conditional operators for optional elements and attributes. I now need to convert the XDocuments back to the class, but as they come from different schemes, many elements and sub-elements may be optional. The only way I know of doing this is to use a lot of if statements. This approach doesn't seem very LINQ and uses a great deal more code than when I create the XDocument, so I wondered if there is a better way to do this? An example would be to get

    ```xml
    <?xml version="1.0"?>
    <root xmlns="somenamespace">
      <object attribute1="This is Optional" attribute2="This is required">
        <element1>Required</element1>
        <element1>Optional</element1>
        <List1>
          Optional List Of Elements
        </List1>
        <List2>
          Required List Of Elements
        </List2>
      </object>
    </root>
    ```

    into

    ```csharp
    public class Object
    {
        public string Attribute1;
        public string Attribute2;
        public string Element1;
        public string Element2;
        public List<ListItem1> List1;
        public List<ListItem2> List2;
    }
    ```

    in a more LINQ-friendly way than this:

    ```csharp
    public bool ParseXDocument(string xml)
    {
        XNamespace xn = "somenamespace";
        XDocument document = XDocument.Parse(xml);
        XElement elementRoot = document.Element(xn + "root");
        if (elementRoot != null)
        {
            // Get Object element
            XElement elementObject = elementRoot.Element(xn + "object");
            if (elementObject != null)
            {
                if (elementObject.Attribute("attribute1") != null)
                {
                    Attribute1 = elementObject.Attribute("attribute1").Value;
                }
                if (elementObject.Attribute("attribute2") != null)
                {
                    Attribute2 = elementObject.Attribute("attribute2").Value;
                }
                else
                {
                    // attribute2 is required, so return false
                    return false;
                }
                // The ifs and if/elses get deeper and deeper for the next elements, lists, etc.
            }
            else
            {
                // object is a required element, so return false
                return false;
            }
        }
        else
        {
            // root is a required element, so return false
            return false;
        }
        return true;
    }
    ```

    Update: Just to clarify, the ParseXDocument method is inside the "Object" class. Every time an XML document is received, the Object class instance has some or all of its values updated.
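    A hedged sketch of the idiom that usually removes most of these null checks: casting an XAttribute or XElement to string yields null when the node is missing, instead of throwing, so optional and required members collapse to one line each. Under the same schema assumptions as above:

    ```csharp
    XElement obj = document.Root.Element(xn + "object");
    if (obj == null) return false;                     // object is required

    Attribute1 = (string)obj.Attribute("attribute1");  // null if absent: optional
    Attribute2 = (string)obj.Attribute("attribute2");
    if (Attribute2 == null) return false;              // attribute2 is required

    Element1 = (string)obj.Element(xn + "element1");
    ```

    XDocument.Parse already throws on malformed input, so the remaining checks express exactly the "required" rules and nothing else.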


  • best alternative to in-definition initialization of static class members? (for SVN keywords)

    - by Jeff
    I'm storing expanded SVN keyword literals for .cpp files in 'static char const *const' class members and want to store the .h descriptions as similarly as possible. In short, I need to guarantee single instantiation of a static member (presumably in a .cpp file) to an auto-generated non-integer literal living in a potentially shared .h file. Unfortunately the language makes no attempt to resolve multiple instantiations resulting from assignments made outside class definitions, and explicitly forbids non-integer initializers inside class definitions. My best attempt (using static-wrapping internal classes) is not too dirty, but I'd really like to do better. Does anyone have a way to template the wrapper below, or an altogether superior approach?

    ```cpp
    // Foo.h: class with .h/.cpp SVN info stored and logged statically
    class Foo {
        static Logger const verLog;
        struct hInfoWrap;
    public:
        static hInfoWrap const hInfo;
        static char const *const cInfo;
    };

    // Would like to eliminate this per-class boilerplate.
    struct Foo::hInfoWrap {
        hInfoWrap() : text("$Id$") { }
        char const *const text;
    };
    ...
    // Foo.cpp: static inits called here
    Foo::hInfoWrap const Foo::hInfo;
    char const *const Foo::cInfo = "$Id$";
    Logger const Foo::verLog(Foo::cInfo, Foo::hInfo.text);
    ...
    // Helper.h: output on construction, with no subsequent activity or stored fields
    class Logger {
        Logger(char const *info1, char const *info2) {
            cout << info1 << endl << info2 << endl;
        }
    };
    ```

    Is there a way to get around the static linkage address issue for templating the hInfoWrap class on string literals? Extern char pointers assigned outside class definitions are linguistically valid, but fail in essentially the same manner as direct member initializations. I get why the language shirks the whole resolution issue, but it'd be very convenient if an inverted extern member qualifier were provided, where the definition code was visible in class definitions to any caller but only actually invoked at the point of a single special declaration elsewhere. Anyway, I digress. What's the best solution for the language we've got, template or otherwise? Thanks!
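    One hedged alternative sketch, since the constraint is "defined in a shared header, initialized exactly once": wrap each keyword in a static member function with a function-local static, which the language guarantees is initialized a single time no matter how many translation units include the header:

    ```cpp
    // Sketch only; keyword string and class name mirror the example above.
    class Foo {
    public:
        static char const *hInfoText() {
            static char const *const text = "$Id$";  // single initialization guaranteed
            return text;
        }
    };
    ```

    This trades the member-object syntax (Foo::hInfo.text) for a call (Foo::hInfoText()), but removes both the per-class wrapper struct and the .cpp definition line.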


  • Terminating a long-executing thread and then starting a new one in response to user changing parameters via UI in an applet

    - by user1817170
    I have an applet which creates music using the JFugue API and plays it for the user. It allows the user to input a musical phrase which the piece will be based on, or lets them choose to have a phrase generated randomly. I had been using the following method (successfully) to simply stop and start the music, which runs in a thread using the Player class from JFugue. I generate the music using my classes and user input from the applet GUI, then:

    ```java
    private playerThread pthread;
    private Thread threadPlyr;
    private Player player;   // (from variables declaration)

    // pattern is a JFugue object which holds the generated music
    public void startMusic(Pattern p)
    {
        if (pthread == null) {
            pthread = new playerThread();
        } else {
            pthread = null;
            pthread = new playerThread();
        }
        if (threadPlyr == null) {
            threadPlyr = new Thread(pthread);
        } else {
            threadPlyr = null;
            threadPlyr = new Thread(pthread);
        }
        pthread.setPattern(p);
        threadPlyr.start();
    }

    class playerThread implements Runnable // plays midi using jfugue Player
    {
        private Pattern pt;

        public void setPattern(Pattern p) {
            pt = p;
        }

        @Override
        public void run() {
            try {
                player.play(pt); // takes a couple of minutes or more to execute
                resetGUI();
            } catch (Exception exception) {
            }
        }
    }
    ```

    And the following to stop the music when the user presses the stop/start button while Player.isPlaying() is true:

    ```java
    public void stopMusic()
    {
        threadPlyr.interrupt();
        threadPlyr = null;
        pthread = null;
        player.stop();
    }
    ```

    Now I want to implement a feature which will allow the user to change parameters while the music is playing, create an updated music pattern, and then play that pattern. Basically, the idea is to simulate "real time" adjustments to the generated music. Well, I have been beating my head against the wall on this for a couple of weeks. I've read the standard Java documentation, researched and searched forums, and tried many different ideas, none of which have succeeded. The problem with every approach I've tried is that when I start the new thread with the new, updated musical pattern, all the old threads ALSO start, and there is a cacophony of unintelligible noise instead of my desired output.

    From what I've gathered, the standard methods all require the thread to periodically check the value of a flag variable and shut itself down from within its run block in response to that variable. However, since my thread makes a call that takes several minutes minimum to execute, and I need to terminate it WHILE it is executing, there is really no safe way to do so. So, I'm wondering whether there is something I'm missing when it comes to threads, or whether I can accomplish my goal using a totally different approach. Any ideas or guidance are greatly appreciated! Thank you!
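    A hedged sketch of one way out: since play() blocks, the interruptible-flag pattern indeed doesn't apply, but the asker's own stopMusic() already relies on Player.stop() unblocking a running play(). Keeping one Player per playback and stopping the old one before starting the new one avoids the pile-up of old threads:

    ```java
    // Sketch: one Player per pattern; stop the audible one first.
    private volatile Player currentPlayer;

    public synchronized void playNewPattern(final Pattern p) {
        if (currentPlayer != null) {
            currentPlayer.stop();        // unblocks the old play() call
        }
        final Player player = new Player();
        currentPlayer = player;
        new Thread(new Runnable() {
            public void run() {
                player.play(p);          // returns when stopped or finished
            }
        }).start();
    }
    ```

    The assumption worth flagging: that JFugue's Player.stop() cleanly aborts a blocking play() in the version used. If it doesn't, the same structure works with any player object that can be cancelled from outside its playing thread.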


  • Why fill() and copy() of Collections in java is implemented this way

    - by Priyank Doshi
    According to the JDK source, Collections.fill() is written as below:

    ```java
    public static <T> void fill(List<? super T> list, T obj) {
        int size = list.size();

        if (size < FILL_THRESHOLD || list instanceof RandomAccess) {
            for (int i=0; i<size; i++)
                list.set(i, obj);
        } else {
            ListIterator<? super T> itr = list.listIterator();
            for (int i=0; i<size; i++) {
                itr.next();
                itr.set(obj);
            }
        }
    }
    ```

    It's easy to understand why they didn't use a ListIterator when `list instanceof RandomAccess` holds. But what is the purpose of `size < FILL_THRESHOLD` in that condition? I mean, is there any significant performance benefit in using the iterator for size >= FILL_THRESHOLD but not for size < FILL_THRESHOLD? I see the same approach in Collections.copy():

    ```java
    public static <T> void copy(List<? super T> dest, List<? extends T> src) {
        int srcSize = src.size();
        if (srcSize > dest.size())
            throw new IndexOutOfBoundsException("Source does not fit in dest");

        if (srcSize < COPY_THRESHOLD ||
            (src instanceof RandomAccess && dest instanceof RandomAccess)) {
            for (int i=0; i<srcSize; i++)
                dest.set(i, src.get(i));
        } else {
            ListIterator<? super T> di=dest.listIterator();
            ListIterator<? extends T> si=src.listIterator();
            for (int i=0; i<srcSize; i++) {
                di.next();
                di.set(si.next());
            }
        }
    }
    ```

    FYI:

    ```java
    private static final int FILL_THRESHOLD = 25;
    private static final int COPY_THRESHOLD = 10;
    ```


  • progress at work

    - by noopize
    I work in a small department in a very large company. Our department operates largely as an independent unit within the company, and each member of the team has a different role. My role within the team is operations/admin, and no one knew of my skills in programming, as I never said anything about them before. I just did my work, and in my free time read up on things for my own development.

    Our developer, who used to look after our websites, left a few months ago. Now when we require edits to our websites, even basic HTML changes, we outsource the work. We are getting shafted big time. I could have said something sooner to highlight my skills in this area, but I guess I was just happy to do my own development projects. One reason was that they are using ASP.NET and I have mainly done things in PHP. I only hinted before that I had done things, but I did not want to reveal them before I had completed anything. I was working on something for myself that the company was also trying to implement (an e-commerce site). I used open source, and they decided to go for a proprietary solution. Now I have finished my project and shown it to my boss; their project is still not completed and is quite expensive.

    He was impressed with what I showed him and suggested I go on courses to learn ASP.NET, so that I may be able to do the development work for them; there are also some big upcoming projects in the future. He said this would be a benefit for me and that I should look to be doing a better role than admin. My employer does have a policy that, if relevant to the role, they may support the cost of courses. Now, how do I play this, and what should I say to my boss? I want advice on which MS certified courses would be good for ASP.NET, and on how best to approach my boss to see if they will pay the full amount for the course. And how different will ASP.NET be from PHP?


  • allignment issue of div tag

    - by Quasar the space thing
    I am trying to create a web page where, on click of a button, I can add div tags. My idea was to create two div tags within a single div, so that the overall presentation would be uniform, similar to a table having two columns and multiple rows, where the first column contains only labels and the second column contains a textbox. Here is the JS file:

    ```javascript
    var counter = 0;

    function create_div(type){
        var dynDiv = document.createElement("div");
        dynDiv.id = "divid_"+counter;
        dynDiv.class="main";
        document.body.appendChild(dynDiv);
        question();
        if(type == 'ADDTEXTBOX'){
            ADDTEXTBOX();
        }
        counter=counter+1;
    }

    function question(){
        var question_div = document.createElement("div");
        question_div.class="question";
        question_div.id = "question_div_"+counter;
        var Question = prompt("Enter The Question here:", "");
        var node=document.createTextNode(Question);
        question_div.appendChild(node);
        var element=document.getElementById("divid_"+counter);
        element.appendChild(question_div);
    }

    function ADDTEXTBOX(){
        var answer_div = document.createElement("div");
        answer_div.class="answer";
        answer_div.id = "answer_div_"+counter;
        var answer_tag = document.createElement("input");
        answer_tag.id = "answer_tag_"+counter;
        answer_tag.setAttribute("type", "text");
        answer_tag.setAttribute("name", "textbox");
        answer_div.appendChild(answer_tag);
        var element=document.getElementById("divid_"+counter);
        element.appendChild(answer_div);
    }
    ```

    Here is the CSS file:

    ```css
    .question {
        width: 40%;
        height: auto;
        float: left;
        display: inline-block;
        text-align: justify;
        word-wrap: break-word;
    }

    .answer {
        padding-left: 10%;
        width: 40%;
        height: auto;
        float: left;
        overflow: auto;
        word-wrap: break-word;
    }

    .main {
        width: auto;
        background-color: gray;
        height: auto;
        overflow: auto;
        word-wrap: break-word;
    }
    ```

    My problem is that the code runs without errors, but the two divisions do not end up on one line: after the first div prints to the screen, the second division lands on another line. How can I make both divs appear on the same line? Thank you.

    PS: Should I stick with the current idea of using divs, or should I try some other approach, like tables?
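    A hedged sketch of a likely culprit: element.class is not a DOM property, so the assignments above never attach the CSS classes, leaving each div as an unstyled block-level element on its own line. Assigning className (or using setAttribute) lets the float rules take effect:

    ```javascript
    // Sketch: set the class attribute so the float rules actually apply.
    dynDiv.className = "main";             // instead of dynDiv.class = "main"
    question_div.className = "question";
    answer_div.className = "answer";
    // equivalently: dynDiv.setAttribute("class", "main");
    ```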


  • style a navigation link when a particular div is shown

    - by Matt Meadows
    I have jQuery working to show a particular div when a certain link is clicked. I managed to apply the effect I'm after to the main navigation bar by putting an id on the body tag and styling the link when that id is found. However, I'd like to apply the same effect to the sub-navigation when a certain div is present. How the main navigation is styled — the HTML:

    ```html
    <nav>
      <ul>
        <li id="nav-home"><a href="index.html">Home</a></li>
        <li id="nav-showreel"><a href="showreel.html">Showreel</a></li>
        <li id="nav-portfolio"><a href="portfolio.html">Portfolio</a></li>
        <li>Contact</li>
      </ul>
    </nav>
    ```

    The CSS:

    ```css
    body#home li#nav-home,
    body#portfolio li#nav-portfolio {
        background: url("Images/Nav_Underline.png") no-repeat;
        background-position: center bottom;
        color: white;
    }
    ```

    (The other links haven't been added to the styling yet, as those pages are still in development.) How the sub-navigation is structured:

    ```html
    <nav id="portfolioNav">
      <ul>
        <li id="portfolio-compositing"><a id="compositingWork" href="#">Compositing</a></li>
        <li id="portfolio-animation"><a id="animationWork" href="#">Animation</a></li>
        <li id="portfolio-motionGfx"><a id="GFXWork" href="#">Motion Graphics</a></li>
        <li id="portfolio-3D"><a id="3DWork" href="#">3D</a></li>
      </ul>
    </nav>
    ```

    As you can see, it's a similar format to the main navigation; however, I've tried the same approach and it doesn't work. :( The JavaScript that switches the divs on the navigation click:

    ```javascript
    $(document).ready(function() {
        $('#3DWork').click(function(){
            $('#portfolioWork').load('portfolioContent.html #Portfolio3D');
        });
        $('#GFXWork').click(function(){
            $('#portfolioWork').load('portfolioContent.html #motionGraphics');
        });
        $('#compositingWork').click(function(){
            $('#portfolioWork').load('portfolioContent.html #PortfolioCompositing');
        });
        $('#animationWork').click(function(){
            $('#portfolioWork').load('portfolioContent.html #PortfolioAnimation');
        });
    });
    ```

    JSFiddle for the full HTML & CSS: JSFiddle File. The effect I'm after is the main navigation's underline applied to the active sub-navigation item; one possible approach is sketched below.
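    A hedged sketch of one way to get the sub-nav highlight without a body id, since the panels are already swapped by jQuery: toggle a class on the clicked item in the same place the divs are loaded (the class name "current" is made up):

    ```javascript
    // Sketch: mark the active sub-nav item whenever its panel is loaded.
    $('#portfolioNav a').click(function () {
        $('#portfolioNav li').removeClass('current');
        $(this).parent().addClass('current');
    });
    ```

    with a matching rule in the stylesheet:

    ```css
    #portfolioNav li.current {
        background: url("Images/Nav_Underline.png") no-repeat;
        background-position: center bottom;
        color: white;
    }
    ```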


  • How to see if type is instance of a class in Haskell?

    - by Raekye
    I'm probably doing this completely wrong (the un-Haskell way); I'm just learning, so please let me know if there's a better way to approach this. Context: I'm writing a bunch of tree structures and want to reuse my prettyprint function for binary trees. Not all trees can use a generic Node/Branch data type, though; different trees need different extra data. So to reuse the prettyprint function I thought of creating a class that the different trees would be instances of:

    ```haskell
    class GenericBinaryTree a where
      is_leaf :: a -> Bool
      left    :: a -> a
      node    :: a -> b
      right   :: a -> a
    ```

    This way they only have to implement methods to retrieve the left, right, and current node values, and prettyprint doesn't need to know about the internal structure. Then I get down to here:

    ```haskell
    prettyprint_helper :: GenericBinaryTree a => a -> [String]
    prettyprint_helper tree
      | is_leaf tree = []
      | otherwise = ("{" ++ (show (node tree)) ++ "}")
                    : (prettyprint_subtree (left tree) (right tree))
      where
        prettyprint_subtree left right =
          ((pad "+- " "| ") (prettyprint_helper right))
          ++ ((pad "`- " " ") (prettyprint_helper left))
        pad first rest = zipWith (++) (first : repeat rest)
    ```

    And I get the error `Ambiguous type variable 'a0' in the constraint: (Show a0) arising from a use of 'show'` for `(show (node tree))`. Here's an example of the most basic tree data type and instance definition (my other trees have other fields, but they're irrelevant to the generic prettyprint function):

    ```haskell
    data Tree a = Branch (Tree a) a (Tree a) | Leaf

    instance GenericBinaryTree (Tree a) where
      is_leaf Leaf = True
      is_leaf _    = False
      left  (Branch left node right) = left
      right (Branch left node right) = right
      node  (Branch left node right) = node
    ```

    I could have defined node :: a -> [String] and dealt with the stringification in each instance/type of tree, but this feels neater. In terms of prettyprint I only need a string representation, but if I add other generic binary tree functions later I may want the actual values. So how can I write this so it works whether or not the node value is an instance of Show? Or what other way should I be approaching this problem? In an object-oriented language I could easily check whether a class implements something, or whether an object has a method. I can't use something like

    ```haskell
    prettyprint :: Show a => a -> String
    ```

    because it's not the tree that needs to be showable; it's the value inside the tree (returned by the function node) that needs to be showable. I also tried changing node to `Show b => a -> b`, without luck (and a bunch of other type class/constraint combinations; I don't even know what I'm doing anymore).
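    A hedged sketch of the usual remedy (one of several): parameterize the class over the tree's type constructor rather than the concrete tree type, so the element type appears in the signatures and a Show constraint can be stated exactly where the printer needs it:

    ```haskell
    class GenericBinaryTree t where
      is_leaf :: t a -> Bool
      left    :: t a -> t a
      node    :: t a -> a
      right   :: t a -> t a

    instance GenericBinaryTree Tree where
      is_leaf Leaf = True
      is_leaf _    = False
      left  (Branch l _ _) = l
      node  (Branch _ n _) = n
      right (Branch _ _ r) = r

    -- Only the printer demands Show; other generic functions can use
    -- the node values without that constraint.
    prettyprint_helper :: (GenericBinaryTree t, Show a) => t a -> [String]
    ```

    Here node returns the actual value, and the original `node :: a -> b` problem disappears because b is no longer an unconstrained, caller-chosen type.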


  • Dealing with HTTP w00tw00t attacks

    - by Saif Bechan
    I have a server with Apache, and I recently installed mod_security2 because I get attacked a lot by this. My Apache version is 2.2.3 and I use mod_security2.c. These were the entries from the error log:

    ```
    [Wed Mar 24 02:35:41 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
    [Wed Mar 24 02:47:31 2010] [error] [client 202.75.211.90] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
    [Wed Mar 24 02:47:49 2010] [error] [client 95.228.153.177] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
    [Wed Mar 24 02:48:03 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
    ```

    Here are the errors from the access log:

    ```
    202.75.211.90 - - [29/Mar/2010:10:43:15 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-"
    211.155.228.169 - - [29/Mar/2010:11:40:41 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-"
    211.155.228.169 - - [29/Mar/2010:12:37:19 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-"
    ```

    I tried configuring mod_security2 like this:

    ```
    SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind"
    SecFilterSelective REQUEST_URI "\w00tw00t\.at\.ISC\.SANS"
    SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS"
    SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:"
    SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:\)"
    ```

    The thing is that in mod_security2, SecFilterSelective cannot be used; it gives me errors. Instead I use rules like this:

    ```
    SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind"
    SecRule REQUEST_URI "\w00tw00t\.at\.ISC\.SANS"
    SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS"
    SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:"
    SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:\)"
    ```

    Even this does not work. I don't know what to do anymore. Anyone have any advice?

    Update 1: I see that nobody can solve this problem using mod_security. So far, using iptables seems like the best option, but I think the rule file will become extremely large, because the IPs change several times a day. I came up with two other solutions; can someone comment on whether they are good or not?

    1. The first solution that comes to mind is excluding these attacks from my Apache error logs. This will make it easier for me to spot other, urgent errors as they occur, without having to sift through a long log.
    2. The second option is better, I think: blocking hosts that do not send requests in the correct way. In this example the w00tw00t attack is sent without a hostname, so I think I can block hosts whose requests are not in the correct form.

    Update 2: After going through the answers, I came to the following conclusions. Custom logging for Apache consumes some unnecessary resources, and if there really is a problem, you will probably want to look at the full log with nothing missing. It is better to just ignore the hits and concentrate on a better way of analyzing your error logs; using filters on your logs is a good approach for this.

    Final thoughts on the subject: The attack mentioned above will not reach your machine if you at least have an up-to-date system, so there is basically nothing to worry about. After a while, it can be hard to filter out all the bogus attacks from the real ones, because both the error logs and the access logs get extremely large. Preventing this from happening in any way will cost you resources, and it is good practice not to waste your resources on unimportant stuff. The solution I use now is Linux logwatch. It sends me summaries of the logs, filtered and grouped. This way you can easily separate the important from the unimportant. Thank you all for the help, and I hope this post can be helpful to someone else too.
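    For completeness, a hedged sketch of the iptables route mentioned in Update 1: matching the scanner's fixed signature instead of maintaining an ever-growing IP list. String-matching every port-80 packet has a cost, so treat this as an illustration rather than a recommendation:

    ```sh
    # Drop packets carrying the scanner's request string before Apache sees them.
    iptables -I INPUT -p tcp --dport 80 -m string --algo bm \
             --string "w00tw00t.at.ISC.SANS.DFind" -j DROP
    ```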


  • SQL Server (2012 Enterprise) Browser service failing

    - by Watki02
    I have a problem as described below. I have an instance of SQL Server 2012 Enterprise (thanks to MSDN) for local development on my PC. I try to start the SQL Server Browser service from SQL Server Configuration Manager; it takes a long time to fail, then fails with:

    ```
    The request failed or the service did not respond in a timely fashion.
    Consult the event log or other applicable error logs for details.
    ```

    I checked the event logs and found these errors, in this order (all within the same one-second time frame):

    1. The SQL Server Browser service port is unavailable for listening, or invalid.
    2. The SQL Server Browser service was unable to establish SQL instance and connectivity discovery.
    3. The SQL Server Browser is enabling SQL instance and connectivity discovery support.
    4. The SQL Server Browser service was unable to establish Analysis Services discovery.
    5. The SQL Server Browser service has started.
    6. The SQL Server Browser service has shutdown.

    I checked the firewall rules: both port 1433 (TCP) and 1434 (UDP) are wide open, and the programs and service binary have been allowed through Windows Firewall. I started the Analysis Services service by hand and it works fine; the Browser still won't start.

    Some history:

    1. Installed SQL 2008 R2 Express Advanced
    2. Installed SQL 2012 Express Advanced
    3. Uninstalled SQL 2008 R2 Express Advanced
    4. Installed 2012 SSDT and lots of features with the Express install
    5. Installed a unique instance of SQL 2012 Enterprise with all features
    6. Uninstalled SSDT and reinstalled SSDT with Enterprise (solved a different problem)
    7. Uninstalled SQL 2012 Express
    8. Uninstalled SQL 2012 Enterprise
    9. Removed anything with "SQL" in the name from Control Panel "Programs and Features"
    10. Installed SQL 2012 Enterprise without Analysis Services (this is where I noticed the SQL Browser service was failing to start, even during the install)
    11. Added the Analysis Services feature (and everything else) via the installer (the Browser continued to fail to start during the install)

    Other interesting facts: opening a command window as administrator and trying to run sqlbrowser.exe manually yielded:

    ```
    Microsoft Windows [Version 6.1.7601]
    Copyright (c) 2009 Microsoft Corporation. All rights reserved.

    C:\Windows\system32>cd C:\Program Files (x86)\Microsoft SQL Server\90\Shared

    C:\Program Files (x86)\Microsoft SQL Server\90\Shared>sqlbrowser.exe -c
    SQLBrowser: starting up in console mode
    SQLBrowser: starting up SSRP redirection service
    SQLBrowser: failed starting SSRP redirection services -- shutting down.
    SQLBrowser: starting up OLAP redirection service
    SQLBrowser: Stopping the OLAP redirector
    ```

    As I try to repair the install, it errors out saying:

    ```
    The following error has occurred:
    Service 'SQLBrowser' start request failed.
    Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup.
    For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=0x4F9BEA51%25400xD3BEBD98%25401211%25401
    ```

    Clicking Retry fails every time. When clicking Cancel I get:

    ```
    The following error has occurred:
    SQL Server Browser configuration for feature 'SQL_Browser_Redist_SqlBrowser_Cpu32' was cancelled by user after a previous installation failure. The last attempted step: Starting the SQL Server Browser service 'SQLBrowser', and waiting for up to '900' seconds for the process to complete.
    For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=0x4F9BEA51%25400xD3BEBD98%25401211%25401
    ```

    When I go to uninstall the SQL Browser from "Programs and Features", it complains:

    ```
    Error opening installation log file. Verify that the specified log file location exists and is writable.
    ```

    Is there any way I can fix this, short of re-imaging my computer and reinstalling from scratch? A possible approach would be to somehow really uninstall everything and delete all files related to SQL... is that a good idea, and how would I do that?


  • What are the pitfalls of hardlinked files on my desktop PC?

    - by MountainX
    All the identical-content files on my PC are now hardlinked. (My data is completely de-duplicated; it is a consequence of the way I copied my data from my old computer.) What pitfalls do I need to be aware of, now that certain actions on one file could silently affect a number of other files?

    I know that deleting the file I'm working on is not a problem (assuming I deleted it on purpose). It doesn't affect any of the other hardlinked files, and I don't see that the delete action would lead to unexpected side effects. Moving or renaming the file is not a problem either; I don't see any unexpected consequences. I don't think copying hardlinked files is a problem, but I'm not as confident about unexpected consequences in this regard. What I have seen is that making a copy (to the same disk) of a hardlinked file with cp keeps the copy hardlinked (i.e., the inode number doesn't change in the copy). Copying to another filesystem obviously breaks the hardlink. (I guess one pitfall is forgetting this fact, given that my PC has three hard disks.) Changing permissions does affect all linked files; so far this has proven handy. (I made a large number of the hardlinked files read-only.)

    None of the operations above seem to produce any major unexpected consequences. However, as was pointed out to me by Daniel Beck in a comment, editing or modifying a file can sometimes be a problem. It depends on the tool and maybe the type of edit. (For example, editing small text files using sed seems to always break the link, while using nano doesn't.) This introduces the chance that editing one file could affect all the hardlinked files (i.e., alter the original inode).

    My proposed solution is to make all hardlinked files read-only (and that is already mostly the case). If I can't do that for some files, I will unlink those particular files. Is there any problem with this read-only approach? I'm assuming that if I go to edit a file and find it to be read-only, I'll remember to unlink that filename while making it writable. So one pitfall might be forgetting this rule; in that case, I'll have to rely on my backups. Am I correct in the above statements? And what else do I need to know?

    BTW, I'm running Kubuntu 12.04 and using btrfs. (I have two SSDs and one HDD in the PC, and I will also be adding an external USB HDD. I'm also connected to a network and I mount some NFS shares. I don't assume any of these last bits are relevant to the question, but I'm adding them just in case.)

    BTW, since I have more than one drive (with separate filesystems), to unlink any file all I have to do is copy it to another drive, then move it back. However, using sed also works (in my testing). Here's my script:

    ```bash
    sed -i 's/\(.\)/\1/' file1
    ```

    Surprisingly, this even unlinks zero-byte files. In my testing it also appears to work on non-text files without any special options. (But I understand that the --binary option might be needed on Windows, MS-DOS and Cygwin.) However, copying to another disk and moving back may be the best way to unlink. For my use case, the unlink command doesn't really "unlink"; rather, it "removes".
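    A hedged aside on the two-drive shuffle: the same link-breaking effect is available on a single filesystem, because what matters is writing to a fresh inode and renaming it over the old name (which is also why sed -i breaks links: GNU sed writes a temporary file and renames it into place). A minimal sketch:

    ```bash
    # Break a hardlink in place: the copy gets a new inode,
    # and mv replaces the old directory entry with it.
    cp -p file1 file1.tmp && mv file1.tmp file1
    ```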


  • Excel: conditionally format a cell using the format of another, content-matching cell

    - by Eric A. Meyer
    I have an Excel spreadsheet where I'd like to be able to create a "key" of formatted cells with unique values, and then, in another sheet, format cells using the key formatting. So for example, my key is as follows, with one value per cell and the visual formatting indicated in parentheses:

    - A (red background)
    - B (green background)
    - C (blue background)

    That key lives on one sheet (or in a remote corner of the current sheet, whichever is better). Then, in an area that I mark for conditional formatting, I can type one of those three letters and have the cell where I typed it visually formatted according to the key. So if I type a "B" into one of the conditionally formatted cells, it gets a green background. (Note that I'm using backgrounds here solely for ease of explanation: ideally I want all visual formatting copied over, whether it's foreground color, background color, font weight, borders, or whatever. But I'll take what I can get, obviously.)

    And, just to make it extra tricky, if I change the formatting in the key, that change should be reflected in cells that reference the key. Thus, if I change the "B" formatting in the key from a green background to a purple background, any "B" in the main sheet should switch to the new color. Similarly, it should be possible to add or remove values from the key and have those changes applied to the main data set. I'm okay with the formatting update on key change being triggered by clicking a button or something. I suspect that if any of this is possible it will require VBA, but I've never used it, so I've no idea where to start if that's the case. I'm hoping it's possible without VBA. I know it's possible to just use multiple conditional formats, but my use case is that I'm trying to create the above-described capability for someone who isn't conversant with conditional formatting. I'd like to let them define a key, update it if necessary, and keep on truckin' without me having to rewrite the spreadsheet's formatting rules for them.

    UPDATE: I think I was a bit unclear about my original request, so let me try again with an image. The image shows the "key" on the left, where values and styles are defined using keyboard and mouse input. On the right, you see the data that should be formatted to match the key. Thus if I type a "C" into a cell in the Data area, it should be blue-backed. Furthermore, if I change the formatting of "C" in the key to have a purple background, all the "C" cells should switch from blue to purple. For further craziness, if I add more to the key (say, "D" with a yellow background), then any "D" cells should be styled to match; if I remove a key entry, then matching values in the Data area should revert to default styling.

    So. Is that more clear? Is it possible, in whole or in part? I don't have to use conditional formatting for this; in fact, at this point I suspect I probably shouldn't. But I'm open to any approach!
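    Since the asker suspects VBA may be needed but has no starting point, here is a hedged sketch (sheet names, ranges, and the button wiring are all assumptions): loop over the data area, find the key cell with the same value, and copy its formatting across; cells with no key match are reset to default, which covers removed key entries.

    ```vb
    Sub ApplyKeyFormats()
        Dim keyRange As Range, dataRange As Range
        Dim keyCell As Range, dataCell As Range
        Dim matched As Boolean

        Set keyRange = Worksheets("Key").Range("A1:A10")    ' assumed location
        Set dataRange = Worksheets("Data").Range("B2:E20")  ' assumed location

        For Each dataCell In dataRange
            matched = False
            For Each keyCell In keyRange
                If keyCell.Value <> "" And dataCell.Value = keyCell.Value Then
                    keyCell.Copy
                    dataCell.PasteSpecial xlPasteFormats    ' copies fill, font, borders
                    matched = True
                    Exit For
                End If
            Next keyCell
            If Not matched Then dataCell.ClearFormats       ' revert unmatched cells
        Next dataCell
        Application.CutCopyMode = False
    End Sub
    ```

    Attached to a button, this gives the "update on demand" behavior described above; re-running it after the key changes re-syncs every matching cell.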


  • simple and reliable centralized logging inside Amazon VPC

    - by Nakedible
    I need to set up centralized logging for a set of servers (10-20) in an Amazon VPC. The logging should be such that no log messages are lost if any single server goes offline, or if an entire availability zone goes offline. It should also tolerate packet loss and other normal network conditions without losing or duplicating messages. It should store the messages durably, at minimum on two different EBS volumes in two availability zones, though S3 would be a good place as well. It should also be real-time, so that messages arrive in two different availability zones within seconds of their generation. I also need to sync log files not generated via syslog, so a syslog-only centralized logging solution would not fulfill all the needs, although I guess that limitation could be worked around.

    I have already reviewed a few solutions, which I will list here:

    1. Flume to Flume to S3: I could set up two log servers as Flume hosts which would store log messages either locally or in S3, and configure all the servers with Flume to send all messages to both servers, using the end-to-end reliability options. That way the loss of a single server shouldn't cause lost messages, and all messages would arrive in two availability zones in real time. However, there would need to be some way to join the logs of the two servers, deduplicating all the messages delivered to both. This could be done by adding a unique id to each message on the sending side and then running some manual deduplication passes over the log files. I haven't found an easy solution to the duplication problem.

    2. Logstash to Logstash to ElasticSearch: I could install Logstash on the servers and have them deliver to a central server via AMQP, with the durability options turned on. However, for this to work I would need to use one of the clustering-capable AMQP implementations, or fan out the delivery just as in the Flume case. AMQP seems to be yet another moving part with several implementations and no real guidance on what works best in this sort of setup. And I'm not entirely convinced that I could get actual end-to-end durability from Logstash to ElasticSearch, assuming crashing servers in between. The fan-out solutions run into the deduplication problem again. The best solution that would seem to handle all the cases is Beetle, which seems to provide high availability and deduplication via a Redis store. However, I haven't seen any guidance on how to set this up with Logstash, and Redis is one more moving part for something that shouldn't be terribly difficult.

    3. Logstash to ElasticSearch: I could run Logstash on all the servers, keep all the filtering and processing rules on the servers themselves, and just have them log directly to a remote ElasticSearch server. I think this should give me reliable logging, and I can use the ElasticSearch clustering features to share the database transparently. However, I am not sure whether the setup actually survives Logstash restarts and intermittent network problems without duplicating messages in a failover case or similar. But this approach sounds pretty promising.

    4. rsync: I could just rsync all the relevant log files to two different servers. The reliability aspect should be perfect here, as the files should be identical to the source files after a sync is done. However, doing an rsync several times per second doesn't sound fun. Also, I need the logs to be untamperable after they have been sent, so the rsyncs would need to run in append-only mode. And log rotations mess things up unless I'm careful.

    5. rsyslog with RELP: I could set up rsyslog to send messages to two remote hosts via RELP and keep a local queue to store the messages. There is the deduplication problem again, and RELP itself might also duplicate some messages. However, this would only handle the things that log via syslog.

    None of these solutions seem terribly good, and they still have many unknowns, so I am asking for more information from people who have set up centralized reliable logging as to what the best tools are to achieve that goal.
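    For concreteness, a hedged sketch of what option 5's fan-out might look like in rsyslog's newer config syntax. The host names and queue settings are assumptions, not a tested setup:

    ```
    module(load="omrelp")

    # Fan each message out to two RELP receivers, each with its own
    # disk-assisted queue so messages survive restarts and outages.
    action(type="omrelp" target="loghost-a" port="2514"
           queue.type="LinkedList" queue.filename="relp_a"
           queue.saveOnShutdown="on" action.resumeRetryCount="-1")
    action(type="omrelp" target="loghost-b" port="2514"
           queue.type="LinkedList" queue.filename="relp_b"
           queue.saveOnShutdown="on" action.resumeRetryCount="-1")
    ```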


  • RHCS: GFS2 in A/A cluster with common storage. Configuring GFS with rgmanager

    - by Pavel A
    I'm configuring a two-node A/A cluster with common storage attached via iSCSI, which uses GFS2 on top of clustered LVM. So far I have prepared a simple configuration, but am not sure of the right way to configure the gfs resource. Here is the rm section of /etc/cluster/cluster.conf:

    ```xml
    <rm>
        <failoverdomains>
            <failoverdomain name="node1" nofailback="0" ordered="0" restricted="1">
                <failoverdomainnode name="rhc-n1"/>
            </failoverdomain>
            <failoverdomain name="node2" nofailback="0" ordered="0" restricted="1">
                <failoverdomainnode name="rhc-n2"/>
            </failoverdomain>
        </failoverdomains>
        <resources>
            <script file="/etc/init.d/clvm" name="clvmd"/>
            <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs" device="/dev/vg-cs/lv-gfs"/>
        </resources>
        <service name="shared-storage-inst1" autostart="0" domain="node1" exclusive="0" recovery="restart">
            <script ref="clvmd">
                <clusterfs ref="gfs"/>
            </script>
        </service>
        <service name="shared-storage-inst2" autostart="0" domain="node2" exclusive="0" recovery="restart">
            <script ref="clvmd">
                <clusterfs ref="gfs"/>
            </script>
        </service>
    </rm>
    ```

    This is what I mean: when using the clusterfs resource agent to handle a GFS partition, it is not unmounted by default (unless the force_unmount option is given). This way, when I issue

    ```
    clusvcadm -s shared-storage-inst1
    ```

    clvmd is stopped, but GFS is not unmounted, so the node cannot alter the LVM structure on shared storage anymore, but can still access data. And even though a node can do this quite safely (dlm is still running), this seems rather inappropriate to me, since clustat reports that the service on that node is stopped. Moreover, if I later try to stop cman on that node, it will find a dlm lock produced by GFS and fail to stop.

    I could have simply added force_unmount="1", but I would like to know the reason behind the default behavior. Why is it not unmounted? Most of the examples out there silently use force_unmount="0", some don't, but none of them give any clue on how the decision was made. Apart from that, I have found sample configurations where people manage GFS partitions with the gfs2 init script (https://alteeve.ca/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Defining_The_Resources), or even simply enable services such as clvm and gfs2 to start automatically at boot (http://pbraun.nethence.com/doc/filesystems/gfs2.html), like:

    ```
    chkconfig gfs2 on
    ```

    If I understand the latter approach correctly, such a cluster only controls whether nodes are still alive and can fence errant ones, but has no control over the status of its resources. I have some experience with Pacemaker, and I'm used to all resources being controlled by the cluster, so that action can be taken not only when there are connectivity issues, but whenever any of the resources misbehave. So, which is the right way for me to go?

    1. Leave the GFS partition mounted (any reasons to do so?)
    2. Set force_unmount="1". Won't this break anything? Why isn't this the default?
    3. Use a script resource <script file="/etc/init.d/gfs2" name="gfs"/> to manage the GFS partition.
    4. Start it at boot and don't include it in cluster.conf (any reasons to do so?)

    This may be the sort of question that cannot be answered unambiguously, so it would also be of much value to me if you shared your experience or expressed your thoughts on the issue. How does, for example, /etc/cluster/cluster.conf look when configuring gfs with Conga or ccs (they are not available to me, since for now I have to use Ubuntu for the cluster)? Thank you very much!
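    For reference, a hedged sketch of what option 2 would look like: the same resource line with forced unmount enabled (untested here; the behavior should be verified against the rgmanager documentation for the distribution in use):

    ```xml
    <clusterfs name="gfs" fstype="gfs2" mountpoint="/mnt/gfs"
               device="/dev/vg-cs/lv-gfs" force_unmount="1"/>
    ```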

    Read the article

  • Thoughts on home NAS server

    - by user826955
    I currently have a NAS with a 2x2TB HDD + 1x16GB SSD layout on a mini-ITX Atom board, housed in a Lian Li PC-Q07 case. On this system I was running FreeBSD 8 with a gmirror RAID 1 setup, which was enough for my needs. So far I was using the NAS for:

    - File server with the AFP protocol (only Mac clients used)
    - SVN server hosting all the source trees of my projects
    - JIRA (performance was okay-ish)
    - Time Machine backup for the Macs

    The power consumption was about 38W, although I did not put the HDDs to sleep when unused (I think this is not possible in a RAID setup). I liked the NAS because:

    - the performance was good through gigabit LAN (enough for my needs)
    - the power consumption was good
    - it's a pretty small case and fits in one of my cupboards

    I disliked the NAS a bit because:

    - it was a bit noisy; the Q07 case vibrated a good amount because of the HDDs, so I switched the NAS off every evening
    - I do not have a real backup of the data on the NAS, only the internal RAID 1 as a safety net. I really don't want to lose my source trees under any circumstances, so I would sleep better if I knew I had regular backups somewhere.

    Recently the board seems to have died; I can't boot anymore. Thus, I am thinking about a redesign of my NAS. (I still have to find out which parts are broken; I probably need to replace the mainboard and the SSD. The HDDs seem to be okay.)

    First of all, I was wondering what other users have as a backup for their NAS. Are you actually using a second NAS and regularly copying the data over to keep it safe? Or is there a better solution? I was thinking about getting a cheap NAS like the Synology DS112j with only one disk, and using rsync or something similar to regularly copy data over to the second NAS (wake the second NAS before the copy, shut it down afterwards); a rough sketch of such a job is below. Although this approach seems somewhat weird, it would have the benefit(?) that I could use a single disk instead of RAID in the main NAS, put the disk to sleep when idle, and have the NAS running 24/7 with low energy consumption (I found no way to do this with a gmirror setup). Is there any recommended backup solution for a small NAS?

    Then I was thinking about a different RAID setup. Since I have to buy a new mainboard as well as an SSD, I might as well switch over to an i3 board with more RAM, and also switch to ZFS. I am not familiar with ZFS (I have never used it), but I read and hear much about it. Would it be viable to set up ZFS storage with only 2 disks? Can I easily extend this storage with more disks once I choose to add some? I could maybe get a new case like the Fractal Design Array R2, which has more 3.5" slots. I could as well get another 2 disks, but I would prefer sticking with the existing 2 for energy/heat/noise reasons. Should I go for ZFS storage or stick to my gmirror setup?

    I would also like to keep FreeBSD as the operating system, and I don't need any web GUI or the like (that is, I don't need/want to use FreeNAS, Openfiler, etc.). Does anyone perhaps have a similar setup in use, so I can compare energy consumption/noise/software setup? Any guidance towards the NAS of my dreams (silent, low energy, safe with backups) is much appreciated.
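    To make the rsync idea concrete, this is the kind of script I have in mind for the main NAS's crontab. It is an untested sketch: the MAC address, hostname and paths are placeholders, and it assumes a wake-on-LAN tool such as wakeonlan is installed and that the backup NAS accepts key-based SSH logins:

        #!/bin/sh
        # Placeholders: replace with the backup NAS's real MAC, host and paths
        BACKUP_MAC="00:11:22:33:44:55"
        BACKUP_HOST="backup-nas"
        SRC="/data/"
        DST="/volume1/backup/"

        # Wake the backup NAS and give it time to boot
        wakeonlan "$BACKUP_MAC"
        sleep 120

        # Mirror the data; --delete makes the destination an exact mirror,
        # which also means an accidental deletion propagates on the next run
        rsync -az --delete "$SRC" "${BACKUP_HOST}:${DST}"

        # Shut the backup NAS down again (requires suitable rights there)
        ssh "$BACKUP_HOST" poweroff

    Because --delete mirrors mistakes too, it would probably be wiser to keep a few rotated snapshots on the backup side (for example with rsync's --link-dest option) instead of a single bare mirror.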

    Read the article

  • Setting environment variables in OS X

    - by Percival Ulysses
    Despite the warning that questions that can be answered are preferred, this question is more a request for comments. I apologize for this, but I feel that it is valuable nonetheless.

    The problem of setting up environment variables such that they are available to GUI applications has been around since the dawn of Mac OS X. The solution with ~/.MacOSX/environment.plist never satisfied me, because it was not reliable and bash-style globbing wasn't available. Another solution is the use of login hooks with a suitable shell script, but these are deprecated. The Apple-approved replacement for the functionality of login hooks is the use of Launch Agents. I provide a launch agent that is located in /Library/LaunchAgents/:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>user.conf.launchd</string>
            <key>Program</key>
            <string>/Users/Shared/conflaunchd.sh</string>
            <key>ProgramArguments</key>
            <array>
                <string>~/.conf.launchd</string>
            </array>
            <key>EnableGlobbing</key>
            <true/>
            <key>RunAtLoad</key>
            <true/>
            <key>LimitLoadToSessionType</key>
            <array>
                <string>Aqua</string>
                <string>StandardIO</string>
            </array>
        </dict>
        </plist>

    The real work is done in the shell script /Users/Shared/conflaunchd.sh, which reads ~/.conf.launchd and feeds it to launchctl:

        #! /bin/bash
        #filename="$1"
        filename="$HOME/.conf.launchd"

        # nothing to do if the user has no config file
        if [ ! -r "$filename" ]; then
            exit
        fi

        # set up PATH the way Apple intends before processing the file
        eval $(/usr/libexec/path_helper -s)

        while read line; do
            # skip lines that only contain whitespace or a comment
            if [ ! -n "$line" -o `expr "$line" : '#'` -gt 0 ]; then
                continue
            fi
            # each remaining line is passed to launchctl as a subcommand
            eval launchctl $line
        done <"$filename"

        exit 0

    Notice the call to path_helper to get PATH set up right. Finally, ~/.conf.launchd looks like this:

        setenv PATH ~/Applications:"${PATH}"
        setenv TEXINPUTS .:~/Documents/texmf//:
        setenv BIBINPUTS .:~/Documents/texmf/bibtex//:
        setenv BSTINPUTS .:~/Documents/texmf/bibtex//:
        # Locale
        setenv LANG en_US.UTF-8

    These are launchctl commands; see its manpage for further information. This works fine for me (I should mention that I'm still a Snow Leopard guy), and GUI applications such as TeXstudio can see my local texmf tree. Things that could be improved:

    - The shell script has a commented-out filename="$1" in it. This is not accidental: the file name should be fed to the script by the launch agent as an argument, but that doesn't work. It would also be possible to put the script itself into the launch agent.
    - I am not sure how secure this solution is, as it uses eval with user-provided strings.

    It should be mentioned that Apple intended a somewhat similar approach with ~/launchd.conf, but as of this date and OS it is unsupported (see the manpage of launchd.conf). I guess that things like globbing would not work there as they do in this proposal. Finally, I would mention the sources I used as information on Launch Agents, but StackExchange doesn't let me [1], [2], [3]. Again, I am sorry that this is not a real question; I still hope it is useful.
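    For completeness, this is how one can check that the agent did its job, assuming launchctl's getenv subcommand is available on the OS version in question and that the plist file is named user.conf.launchd.plist (an untested sketch):

        # Reload the agent without logging out and back in
        launchctl unload /Library/LaunchAgents/user.conf.launchd.plist
        launchctl load /Library/LaunchAgents/user.conf.launchd.plist

        # Ask launchd what it will hand to newly launched applications
        launchctl getenv TEXINPUTS
        launchctl getenv LANG

    Applications that are already running keep their old environment; only processes launched afterwards see the new values.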

    Read the article

  • Log transport and aggregation at scale

    - by markdrayton
    How are you analysing log files from UNIX/Linux machines? We run several hundred servers which all generate their own log files, either directly or through syslog. I'm looking for a decent solution to aggregate these and pick out important events. This problem breaks down into 3 components:

    1) Message transport

    The classic way is to use syslog to log messages to a remote host. This works fine for applications that log to syslog, but is less useful for apps that write to a local file. Solutions for this might include having the application log into a FIFO connected to a program that sends the messages on via syslog, or writing something that will follow the local files and send the output to the central syslog host (a sketch of this is below). However, if we go to the trouble of writing tools to get messages into syslog, would we be better off replacing the whole lot with something like Facebook's Scribe, which offers more flexibility and reliability than syslog?

    2) Message aggregation

    Log entries seem to fall into one of two types: per-host and per-service. Per-host messages are those which occur on one machine; think disk failures or suspicious logins. Per-service messages occur on most or all of the hosts running a service. For instance, we want to know when Apache finds an SSI error, but we don't want the same error from 100 machines. In all cases we only want to see one of each type of message: we don't want 10 messages saying the same disk has failed, and we don't want a message each time a broken SSI is hit.

    One approach is to aggregate multiple messages of the same type into one on each host, send the messages to a central server, and then aggregate messages of the same kind into one overall event there. SEC can do this, but it's awkward to use. Even after a couple of days of fiddling I had only rudimentary aggregations working and had to constantly look up the logic SEC uses to correlate events. It's powerful but tricky stuff: I need something my colleagues can pick up and use in the shortest possible time, and SEC rules don't meet that requirement.

    3) Generating alerts

    How do we tell our admins when something interesting happens? Mail the group inbox? Inject into Nagios?

    So, how are you solving this problem? I don't expect an answer on a plate; I can work out the details myself, but some high-level discussion of what is surely a common problem would be great. At the moment we're using a mishmash of cron jobs, syslog and who knows what else to find events. This isn't extensible, maintainable or flexible, and as such we miss a lot of stuff we shouldn't.

    Update: we're already using Nagios for monitoring, which is great for detecting down hosts/testing services/etc., but less useful for scraping log files. I know there are log plugins for Nagios, but I'm interested in something more scalable and hierarchical than per-host alerts.
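    To sketch what I mean by getting a local file into syslog (the path and tag are made up; tail -F and logger(1) are standard tools):

        #!/bin/sh
        # Follow an application's private log file, surviving rotation (-F),
        # and re-emit every line to syslog with a tag and facility/priority
        # that the central syslog server can filter on
        tail -F /var/log/myapp/app.log | logger -t myapp -p local0.info

    Something like this would need to run under a supervisor (daemontools, runit, or a plain init script) so the pipeline is restarted if either side dies.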

    Read the article

  • Multi-tenant ASP.NET MVC – Introduction

    - by zowens
    I've read a few different blogs that talk about multi-tenancy and how to resolve some of the issues surrounding it. What I've come to realize is that these implementations overcomplicate the issues and give only a muddy implementation! I've seen some really illogical code out there. I have recently been building a multi-tenancy framework for internal use at eagleenvision.net, and through this process I've found a few techniques that make building multi-tenant applications actually quite easy. I will be posting a few different entries on the issue and my personal implementation. In this first post, I will discuss what multi-tenancy means and how my implementation will be structured.

    So what's the problem?

    Here's the deal. Multi-tenancy is basically a technique for reusing web application code. A multi-tenant application is an application that runs as a single instance for multiple clients; here a "client" is a different URL binding on IIS using ASP.NET MVC. The problem with separate instances of what is essentially the same application is that you have to spin up separate instances of ASP.NET, and as the number of running instances of ASP.NET grows, so does the memory footprint of IIS. Stack Exchange shifted its architecture to multi-tenancy in March. As their blog post explains, multi-tenancy saves cost in terms of memory utilization and physical disc storage. If you use the same code base for many applications, multi-tenancy just makes sense: you'll reduce the amount of work it takes to synchronize the site implementations, and you'll thank your lucky stars later for choosing to use one application for multiple sites. Multi-tenancy allows the freedom of extensibility while relying on some pre-built code.

    You'd think this would be simple, but I have seen a real lack of reference material on the subject in terms of ASP.NET MVC, which is somewhat surprising given the number of ASP.NET MVC users. However, I will certainly fill the void ;). Implementing a multi-tenant application takes a little thinking. It's not straightforward, because the possibilities of implementation are endless. I have yet to see a great implementation of a multi-tenant MVC application. The only one that comes close to what I have in mind is Rob Ashton's implementation (all the entries are listed on this page). There's some really nasty code in there... something I'd really like to avoid. He has also written a library (MvcEx) that attempts to aid multi-tenant development. This code is even worse, in my honest opinion. Once I start seeing Reflection.Emit, I have to assume the worst :) In all seriousness, if his implementation makes sense to you, use it! It's a fine implementation that should be given a look; at least read the code. I will reference MvcEx going forward as a comparison to my implementation, and I will explain why my approach differs from it and how it is better or worse (hopefully better).

    Core Goals of my Multi-Tenant Implementation

    The first and foremost goal is to use Inversion of Control containers to my advantage. As you will see throughout this series, I pass containers around quite frequently and rely heavily on their use. I will be using StructureMap in my implementation; however, you could probably use your favorite IoC tool instead. <RANT>However, please don't be stupid and abstract your IoC tool. Each IoC is powerful, and by abstracting away its capabilities you're doing yourself a real disservice. Who in the world swaps out IoC tools...? No one!</RANT> (It had to be said.) I will outline some of the goodness of StructureMap as we go along; it is an invaluable tool in my tool belt and simple to use in my multi-tenant implementation.

    The second core goal is to represent a tenant as easily as possible. Just as a dependency container will be a first-class citizen, so will a tenant. This allows us to easily extend and use tenants, and also allows different ways of "plugging" tenants into your application. In my implementation, there will be a single dependency container for each tenant, which isolates the dependencies of one tenant from those of the others.

    The third goal is to use composition as a means to delegate "core" functions out to the tenant. More on this later.

    Features

    In MvcEx, "Modules" are a code element of the infrastructure. I have simplified this concept and named it "Features". A feature is a simple element of an application. Controllers can be specified to have a feature, and actions can have "sub-features". Each tenant can select the features it needs, and all other features will be hidden from the tenant's users. My implementation doesn't require everything to be a feature: a controller can be common to all tenants. For example, as you will see, I have a "Content" controller that returns the CSS, Javascript and images for a tenant. This logic is common to all tenants and shouldn't be hidden or considered a "feature"; Content is a core component.

    Up next

    My next post will be all about the code. I will reveal some of the foundation of the way I do multi-tenancy, with posts dedicated to Foundation, Controllers, Views, Caching, Content and how to set up the tenants. Each post will go in depth on the issues and implementation details while adhering to the core goals outlined in this post. As always, comment with questions, DM me on twitter, or send me an email.

    Read the article

< Previous Page | 345 346 347 348 349 350 351 352 353 354 355 356  | Next Page >