Search Results

Search found 285 results on 12 pages for 'yi tang uni'.


  • MFC/CCriticalSection: Simple lock situation hangs

    - by raph.amiard
    I have to write a simple threaded program with MFC/C++ for a uni assignment. I have a simple scenario in which I have a worker thread that executes a function along the lines of: UINT createSchedules(LPVOID param) { genProgThreadVal* v = (genProgThreadVal*) param; // v->searcherLock is of type CCriticalSection* while(1) { if(v->searcherLock->Lock()) { // do the stuff, access the shared object, exit clause etc.. v->searcherLock->Unlock(); } } PostMessage(v->hwnd, WM_USER_THREAD_FINISHED, 0, 0); delete v; return 0; } In my main UI class, I have a CListCtrl that I want to be able to access the shared object (of type std::list). Hence the locking. So this list control has a handler function looking like this: void Ccreationprogramme::OnLvnItemchangedList5(NMHDR *pNMHDR, LRESULT *pResult) { LPNMLISTVIEW pNMLV = reinterpret_cast<LPNMLISTVIEW>(pNMHDR); if((pNMLV->uChanged & LVIF_STATE) && (pNMLV->uNewState & LVNI_SELECTED)) { searcherLock.Lock(); // do the stuff on the shared object searcherLock.Unlock(); // do some more stuff } *pResult = 0; } The searcherLock in both functions is the same object. The worker thread function is passed a pointer to the CCriticalSection object, which is a member of my dialog class. Everything works, but as soon as I click on my list, which triggers the handler function, the whole program hangs indefinitely. I tried using a CMutex. I tried using a CSingleLock wrapping the critical section object, and none of this has worked. What am I missing?
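    One pattern that often resolves this kind of hang (offered here as a hedged sketch, not a confirmed fix for the code above): scope the lock with CSingleLock so it is always released, and yield between iterations so the tight while(1) loop cannot re-acquire the critical section before the UI thread gets a turn. The struct fields other than searcherLock and hwnd, and the WM_USER + 1 message id, are placeholders for whatever the assignment actually uses:

        // Hypothetical reworked worker loop (MFC). Assumes the same shared
        // CCriticalSection that the dialog's handler uses.
        #include <afxwin.h>   // MFC core
        #include <afxmt.h>    // CCriticalSection, CSingleLock
        #include <list>

        struct genProgThreadVal {
            CCriticalSection* searcherLock;  // shared with the dialog
            std::list<int>*   sharedList;    // placeholder for the real shared object
            HWND              hwnd;
            bool              done;          // placeholder exit flag
        };

        UINT createSchedules(LPVOID param)
        {
            genProgThreadVal* v = static_cast<genProgThreadVal*>(param);
            for (;;) {
                {
                    CSingleLock lock(v->searcherLock, TRUE); // acquired here...
                    // ... work on the shared object, check the exit clause ...
                    if (v->done)
                        break;
                }                                            // ...released here (RAII)
                Sleep(1); // yield so the UI thread can take the lock
            }
            ::PostMessage(v->hwnd, WM_USER + 1, 0, 0); // WM_USER_THREAD_FINISHED in the question
            delete v;
            return 0;
        }

    The key design point is that the worker holds the lock only for the short critical section and sleeps outside it; looping on Lock()/Unlock() without ever yielding lets the worker win the critical section back immediately every time, which starves OnLvnItemchangedList5 and looks exactly like an indefinite hang.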


  • Removing the port number from URL

    - by DrewSSP
    I'm new to anything related to servers and am trying to deploy a Django application. Today I bought a domain name for the app and am having trouble configuring it so that the base URL does not need the port number at the end of it. I have to type www.trackthecharts.com:8001 to see the website when I only want to use www.trackthecharts.com. I think the problem is somewhere in my nginx, gunicorn or supervisor configuration. gunicorn_config.py: command = '/opt/myenv/bin/gunicorn' pythonpath = '/opt/myenv/top-chart-app/' bind = '162.243.76.202:8001' workers = 3 nginx config: server { server_name 162.243.76.202; access_log off; location /static/ { alias /opt/myenv/static/; } location / { proxy_pass http://127.0.0.1:8001; proxy_set_header X-Forwarded-Host $server_name; proxy_set_header X-Real-IP $remote_addr; add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"'; } } supervisor config: [program:top_chart_gunicorn] command=/opt/myenv/bin/gunicorn -c /opt/myenv/gunicorn_config.py djangoTopChartApp.wsgi autostart=true autorestart=true stderr_logfile=/var/log/supervisor_gunicorn.err.log stdout_logfile=/var/log/supervisor_gunicorn.out.log Thanks for taking a look.
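    The usual shape of the fix, offered as a hedged sketch rather than a verified answer for this exact setup: the request for the bare domain has to arrive on port 80, where nginx can listen, and nginx then proxies it to gunicorn on 8001 internally; the port only shows up in the URL when the browser is sent straight to gunicorn. Assuming the DNS A record for www.trackthecharts.com already points at 162.243.76.202, the existing server block would gain a listen directive and the domain as server_name, roughly like this:

        server {
            listen 80;                                # accept plain HTTP on the default port
            server_name www.trackthecharts.com trackthecharts.com;

            access_log off;

            location /static/ {
                alias /opt/myenv/static/;
            }

            location / {
                proxy_pass http://127.0.0.1:8001;     # gunicorn stays on 8001 internally
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

    With that in place the site is reached as http://www.trackthecharts.com with no port, and gunicorn keeps binding to 8001. Note that the configuration quoted above binds gunicorn to the public IP while nginx proxies to 127.0.0.1; binding gunicorn to 127.0.0.1:8001 would make the two consistent and also keep gunicorn from being reachable directly. Again, this is the common pattern, not a tested fix for this configuration.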


  • Fastest way to learn Flex and Java EE?

    - by LostWebNewbie
    Ok, so me and my 2 friends have to make a webapp, and we think it's a good opportunity to learn JEE and Flex. The thing is we have very little knowledge about them and we have only 3 months to do it (it doesn't have to be super complicated). So my question is: what, in your opinion, would be the fastest way to learn them both? I guess we need to know some JSP, Servlets, JPA (??), Flex, maybe JavaScript+CSS? Anything else like EJB? Should we also learn Spring (or Struts)? Obviously reading books would be a good idea, but I bet we won't make the deadline if we try to read all the books... @Edit: I know the basics of JSP/Servlets (read Head First JSP & Servlets), but I have made only 1 project so far (a semi-decent hangman with JSP/Servlets and JPA for persistence), and that's about it. Flex - I'm just starting; I really know only the basics of MXML and AS3. As for why: 1) because we need to do a project for the uni, and since I was thinking about becoming a web dev (yup - JEE+Flex) after graduation, this is the perfect opportunity to learn them.


  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to the solution for this problem. In my PASS Summit 2013 session I am introducing this approach. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will at the end make the whole whitepaper. Abstract With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we will explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge onto the customer, thus making it possible for a customer to continue to learn independently. This paper is based on practical experience with different projects covering online banking and credit card usage. Introduction A fraud is a criminal or deceptive activity with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y., & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS. Dealing with frauds includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism, which tries to disable frauds by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster surpasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns a weight in terms of probability between 0 and 1 that represents a score for evaluating whether a transaction is fraudulent or not. A fraud detection mechanism cannot detect frauds with a probability of 100%; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example with random sampling. There are two principal data mining techniques available both in general data mining as well as in specific fraud detection techniques: supervised or directed and unsupervised or undirected. Supervised techniques or data mining models use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent or not. Customers at some point in time do report frauds, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. When the patterns and rules that lead to frauds are learned through the model training process, they can be used for prediction of the fraud flag on new incoming transactions. 
Unsupervised techniques analyze data without prior knowledge, without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers. In both cases, there should be more frauds in the data set selected for checking by using the data mining knowledge compared to selecting the data set with simpler methods; this is known as the lift of a model (a compact formula is given at the end of this excerpt). Typically, we compare the lift with random sampling. The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for controlling whether the supervised models are still efficient. Accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time. In addition, fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds that include predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established. There are many difficulties in developing a fraud detection system. As has already been mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as the companies generally do not want to publicly expose actual fraudulent behaviors. Therefore, it can be quite difficult, if not impossible, to cross-evaluate the models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds in the total number of transactions is small; typically much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms. The SQL Server suite includes all of the software required to create, deploy, and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract-transform-load (ETL) applications.
SSIS is typically used for loading a DW, and in addition, it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) support the occasional data cleansing process by maintaining a knowledge base. Master Data Services is an application that helps companies maintain a central, authoritative source of their master data, i.e. the data most important to any organization. For an overview of the SQL Server business intelligence (BI) part of the suite that includes Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T., & Sarka D. (2009). MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance. MS Press. For an overview of the enterprise information management (EIM) part that includes SSIS, DQS and MDS, please refer to Sarka D., Lah M., & Jerkic G. (2012). Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012. O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z., & Crivat B. (2009). Data Mining with Microsoft SQL Server 2008. Wiley. SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics in Excel. PowerPivot for Excel, which is also freely downloadable for Excel 2010 and is already included in Excel 2013, brings OLAP functionalities directly into Excel. Together they make it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.
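A compact way to write the lift the excerpt keeps referring to (this formalization is added here for clarity and is not quoted from the whitepaper): if T is the set of all transactions, F ⊆ T the fraudulent ones, and C ⊆ T the subset selected for manual checking, then

    \mathrm{lift}(C) \;=\; \frac{\,|F \cap C| \,/\, |C|\,}{\,|F| \,/\, |T|\,}

Random sampling has a lift of about 1 by construction; the continuous learning loop described above amounts to tracking this ratio for the supervised and unsupervised models over time and refreshing a model when its lift sags toward that baseline.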


  • I am now an Oracle Certified Associate!

    - by britta.wolf
    Jan Peuker, a graduate of Hochschule Augsburg and the University of Melbourne, recently earned the Oracle Database 10g Administrator Certified Associate certification. He kindly provided us with this short write-up: "Oracle certification usually starts with the Oracle Certified Associate. No deep practical experience is required yet for this certification. To earn the title of Oracle Database 11g Administrator Certified Associate, you have to pass one exam on SQL (e.g. 1Z0-051) and one exam on administration (1Z0-045). Both exams take 2 hours and have roughly 80 questions, of which about three quarters must be answered correctly to pass. There is no grade. The exams are always taken electronically, and the software allows you to skip and flag questions. During my working years after my first degree I frequently dealt with the Oracle database system. When I did my postgraduate studies at the University of Melbourne, my course advisor suggested I take the course "Advanced Database Administration". It is based entirely on the official Oracle training materials for the Oracle administration exam and therefore qualifies you to take part in the official certification. In contrast to the SQL exam, whose content you can easily teach yourself, a real course with a seminar helps immensely for the administrator certification. Many concepts are hard to learn from a book. The components of the SGA or creating users may be easy to pick up, but redo and undo management as well as backup and recovery can only be understood if you have examples and can try them out on a test system (not a "small" XE database, but a "real" database with Enterprise Manager). I certainly did not invest an excessive amount of time, because the base system is very logical. For the less intuitive areas, especially the new features, I wrote the technical terms on flashcards and worked through the training materials on the system. The exam was surprisingly hard for me, because the simple day-to-day business is clearly underrepresented. The multiple-choice questions cover many special cases and use cases (you can find many sample questions online). Since both tests are in English, you should be well versed not only in the terminology of the Oracle database system but also in the technical vocabulary of the database world in general. Often single words (e.g. redundant or synchronized, redo log or redo log buffer) decide the right answer, and a significant share of the questions is based on drawings or diagrams that have to be interpreted. For example, you have to judge from a log excerpt why the database was not shut down cleanly. General knowledge about database systems unfortunately does not help much, since a disproportionate number of questions are about Oracle-specific topics, such as the tuning services (ADDM), Flashback, SQL Loader and a little PL/SQL. The SQL exam, on the other hand, is very straightforward - which does not mean easier. Here it is more about memorizing syntax, which does not suit me personally. Especially as an application programmer you often do not know the proprietary SQL functions; it is hard to remember individual date calculation functions, type conversions, namespaces or crude join methods. The exam, however, puts a lot of weight on exactly these things.
Here too you are confronted with ambiguous multiple-choice questions in which, for example, only the order of the parameters differs. Moreover, the parameters are not spelled out but given in an entity-relationship diagram, where you have to pay attention to the correct data types. For me personally the time was almost too tight, because for many questions you first have to read a diagram, a data excerpt or a longer text in order to then find the correct statements. Flashcards only help to a limited extent here - instead: practice, practice, practice. Thanks to the relatively low pass score of 70% you can afford to skip uncertain questions first and reconsider them only after all the sure ones have been answered. The exam is definitely fair. I learned a lot through the Oracle certification program. The databases under my supervision run noticeably faster and deliver higher availability, because I was able to eliminate problems I had not been aware of before. A classic misconfiguration - full archive logs colliding with flashback storage kept for too long - was something I was able to clear up in one of the very first hours of my course at the Uni Melbourne with the help of my professor. Both exams could be taken alongside other exams without any problems. I can recommend thorough online research, but also the Oracle Press books, which come with exam questions at the end of every chapter. That way you save time and are still well prepared. Even though I will not pursue a career as an administrator, I am glad to understand the technology underlying many applications better. For my daily work as an application developer it has above all helped me to understand Oracle concepts, e.g. in the area of transaction control and recovery, and thus to be able to evaluate and recommend many open source products more sensibly." You can find an overview of the certification paths on the Oracle University website (simply select "Deutschland" and then click on "Zertifizierungen" / certifications).


  • Invitation to the Oracle SE University on December 13-14, 2011

    - by A&C Redaktion
    Dear Oracle Partners, the first Oracle SE University is being launched jointly by Azlan and Oracle, and we cordially invite you to attend. The target audience is the technical contacts of all Oracle partners. In Fulda we will offer you a comprehensive overview of current technologies, products and services, with a focus on Oracle on Oracle, positioning and architecture. Proven software product areas such as Database and Fusion Middleware will be covered, as well as classic hardware topics such as systems, storage and virtualization. You can find the agenda here. Top speakers guarantee high-quality, technically demanding talks. Project reports from the field bring you closer to the core topics, so you can use this knowledge to generate additional revenue. Beyond that, take advantage of the networking opportunities, which are worth their weight in gold in day-to-day business. Logistical information: Date: December 13-14, 2011. Location: Fulda - as Germany's baroque city it offers a charming contrast to our technical topics, but above all it lies almost exactly in the geographic center of Germany and is easily reached from anywhere in Germany via a modern ICE train station. Hotel Esperanto Kongress- und Kulturzentrum Fulda: from the station it is a leisurely two-minute walk around the corner straight into the hotel, and for those arriving by car there is ample parking directly at the hotel. Target audience: SEs, technical sales, technical consultants. Concept of the Oracle SE University: plenum sessions and a keynote with strategic overview topics; breakout sessions (4 sessions in parallel) with technical depth covering technology, project experiences, architectures, solution scenarios and much more; networking over a joint dinner on December 13 from 7:00 p.m. in the hotel's own grill restaurant "El Toro Nero", which serves the Brazilian specialty "Rodizio". Participation in the Oracle SE University, including the joint dinner, is free of charge for you; accommodation costs are borne by yourself. Please register for the Oracle SE University here by November 30 at the latest. We have reserved a room contingent; please make the booking yourself online. When booking, please quote the keyword "Oracle SE Uni" to receive the special rate of 102 euros including breakfast. We look forward to welcoming you to Fulda!
Joachim Hissmann, Manager Key Partner HW, Oracle Deutschland
Birgit Nehring, Direktor Software & Solution, Tech Data/Azlan
=================================================================
Contacts - Azlan: Peter Mosbauer, Tel.: 089, [email protected]; Robert Bacci, Tel.: 089 4700-3018, [email protected]. Oracle: Regina Steyer, Tel.: 0211, [email protected]


  • My Red Gate Experience

    - by Colin Rothwell
    I’m Colin, and I’ve been an intern working with Mike in publishing on Simple-Talk and SQLServerCentral for the past ten weeks. I’ve mostly been working “behind the scenes”, making improvements to the spam filtering, along with various other small tweaks. When I arrived at Red Gate, one of the first things Mike asked me was what I wanted to get out of the internship. It wasn’t a question I’d given a great deal of thought to, but my immediate response was the same as almost anybody: to support my growing family. Well, ok, not quite that, but money was certainly a motivator, along with simply making sure that I didn’t get bored over the summer. Three months is a long time to fill, and many of my friends end up getting bored, or worse, knitting obsessively. With the arrogance which seems fairly common among Cambridge people, I wasn’t expecting to really learn much here! In my mind, the part of the year where I am at Uni is the part where I learn things, whilst Red Gate would be an opportunity to apply what I’d learnt. Thankfully, the opposite is true: I’ve learnt a lot during my time here, and there has been a definite positive impact on the way I write code. The first thing I’ve really learnt is that test-driven development is, in general, a sensible way of working. Before coming, I didn’t really get it: how could you test something you hadn’t yet written? It didn’t make sense! My problem was seeing a test as having to test all the behaviour of a given function. Writing tests which test the bare minimum possible and building them up is a really good way of crystallising the direction the code needs to grow in, and ensures you never attempt to write too much code at time. One really good experience of this was early on in my internship when Mike and I were working on the query used to list active authors: I’d written something which I thought would do the trick, but by starting again using TDD we grew something which revealed that there were several subtle mistakes in the query I’d written. I’ve also been awakened to the value of pair programming. Whilst I could sort of see the point before coming, I also thought that it was impossible that two people would ever get more done at the same computer than if they were working separately. I still think that this is true for projects with pieces that developers can easily work on independently, and with developers who both know the codebase, but I’ve found that pair programming can be really good for learning a code base, and for building up small projects to the point where you can start working on separate components, as well as solving particularly difficult problems. Later on in my internship, for my down tools week project, I was working on adding Python support to Glimpse. Another intern and I we pair programmed the entire project, using ping pong pair programming as much as possible. One bonus that this brought which I wasn’t expecting was that I found myself less prone to distraction: with someone else peering over my shoulder, I didn’t have the ever-present temptation to open gmail, or facebook, or yammer, or twitter, or hacker news, or reddit, and so on, and so forth. I’m quite proud of this project: I think it’s some of the best code I’ve written. I’ve also been really won over to the value of descriptive variables names. In my pre-Red Gate life, as a lone-ranger style cowboy programmer, I’d developed a tendency towards laziness in variable names, sometimes abbreviating or, worse, using acronyms. 
I’ve swiftly realised that this is a bad idea when working with a team: saving a few key strokes is inevitably not worth it when it comes to reading code again in the future. Longer names also mean you can do away with a majority of comments. I appreciate that if you’ve come up with an O(n*log n) algorithm for something which seemed O(n^2), you probably want to explain how it works, but explaining what a variable name means is a big no no: it’s so very easy to change the behaviour of the code, whilst forgetting about the comments. Whilst at Red Gate, I took the opportunity to attend a code retreat, which really helped me to solidify all the things I’d learnt. To be completely free of any existing code base really lets you focus on best practises and think about how you write code. If you get a chance to go on a similar event, I’d highly recommend it! Cycling to Red Gate, I’ve also become much better at fitting inner tubes: if you’re struggling to get the tube out, or re-fit the tire, letting a bit of air out usually helps. I’ve also become quite a bit better at foosball and will miss having a foosball table! I’d like to finish off by saying thank you to everyone at Red Gate for having me. I’ve really enjoyed working with, and learning from, the team that brings you this web site. If you meet any of them, buy them a drink!


  • How do I dig myself out of this DEEP hole? [closed]

    - by user74847
    I may be a bit bias in the way i word this but any opinions and suggestions are welcome. I should start by saying i have a MSc in CS and a degree in new media +6 years expereince and im probably around a middleweight developer. I started a web development company with my friend from uni a year ago, there was a 4 month gap in the middle where i went miles away work on a big project. Ive since returned and picked up where we left off. A year on though i find im still staying up til 5am and getting up at 9 sometimes 2-3 days without sleep. While i was away i was working 9-5 and struggling to keep up with doing stuff for my clients 8 hours ahead, after work, so things stagnated. We currently have about 12 active projects, with one other part time developer and a full time freelancer who is dealing with one of our major projects. I am solely responsible for concurrently developing 2 big sites similar to gumtree in functionality, at the same time as about 5-6+ small WordPress based 5-10page sites. a lot of the content isnt in yet or the client is delaying so i chop and change project every other day which does my head in. Is it reasonable to expect myself to remember the intricate details of each project when i come back to it a week later? and remember the details of a task which hasnt been written down? my business partner seems to think so. or am i just forgetful? Im particularly bad at estimating timescales which doesnt help, added to that a lot of the technologies im am using are new to me (a magento site took weeks to theme rather than days and was full of bugs, even after 1000's of google searches and hours reading forums) im still trying to learn and find the best CMS for us to use and getting my head around the likes of Bootstrap and jquery, Cpanel / Linux (we just got a blank vps for me to set up with no experience) even installing an SSL certificate caused everyone's mail clients to go down which was more stress for me to sort out. I find the pressure of the workload and timescales and trying to learn this stuff so fast is beginning to turn me against my career path. The fact that i never seem to get anything done really winds up my business partner and iv come to associate him with the stress and pain of the whole situation especially when I get berated or a look that says "oh you retard" when I forget something. Even today i spent hours learning how a particular themeforest theme worked with wordpress and how i could twist it to work for our partiuclar needs, on the surface had done no work, that triggered a 30 minute tirade of anger and stress and questioning what i had done from my business partner. had i taken too long to work on that? shoudl i have done it in 2 hours instead of 6? i told him i would take 2 hours. i was wrong. I feel like im running myself into the ground. My sleeping pattern has got so bad that when im working im half asleep and making mistakes, my eyes are constantly purple underneath, i literally fall asleep at my desk, its affecting my social life too, ive not slept more than lightly for the last year and grind through impossible code puzzles in my half sleep wich keeps me awake, when im already exhausted. plus the work is rushed and buggy when it does get done so drags on into the next project. I also procrastinate quite badly, pacing the livingroom, looking out the window when Im alone for three days straight in the flat and start to get cabin fever which means i do even less work and the negative feedback loop continues. 
I get told im the only one with the problem when i say that i cant work from home any more, and examples of other freelancers get brought up. an office wouldnt bring any extra cash in to the company but im convinced having that moving more than 2 meters away from my bed to go to "work" would get me working, at the moment i feel guilty like i should be working 24-7. It is important that we do all this work to raise enough cash to get our business to the next level but every month still feels like a struggle to pay the rent (there is about £20K coming in by Jan) and i have to borrow money from friends often to buy food or get a taxi to a meeting, so it is vital the money keeps coming in. (im also 20 mins late for nearly all meetings but thats a different issue) have you experienced anything similar? how can i deal with the issues ive raised? is it realistic to develop 10 sites at once? how can i improve my relationship with my business partner? do you struggle to work at home? how do you deal with that? i think if i dont get my life on track by feb i will seriously consider giving it all up, but that seems like such a waste. any ideas!!? i need help! Thanks.


  • What is New in ASP.NET 4.0 Code Access Security

    - by Xiaohong
    ASP.NET Code Access Security (CAS) is a feature that helps protect server applications on hosting multiple Web sites, ASP.NET lets you assign a configurable trust level that corresponds to a predefined set of permissions. ASP.NET has predefined ASP.NET Trust Levels and Policy Files that you can assign to applications, you also can assign custom trust level and policy files. Most web hosting companies run ASP.NET applications in Medium Trust to prevent that one website affect or harm another site etc. As .NET Framework's Code Access Security model has evolved, ASP.NET 4.0 Code Access Security also has introduced several changes and improvements. The main change in ASP.NET 4.0 CAS In ASP.NET v4.0 partial trust applications, application domain can have a default partial trust permission set as opposed to being full-trust, the permission set name is defined in the <trust /> new attribute permissionSetName that is used to initialize the application domain . By default, the PermissionSetName attribute value is "ASP.Net" which is the name of the permission set you can find in all predefined partial trust configuration files. <trust level="Something" permissionSetName="ASP.Net" /> This is ASP.NET 4.0 new CAS model. For compatibility ASP.NET 4.0 also support legacy CAS model where application domain still has full trust permission set. You can specify new legacyCasModel attribute on the <trust /> element to indicate whether the legacy CAS model is enabled. By default legacyCasModel is false which means that new 4.0 CAS model is the default. <trust level="Something" legacyCasModel="true|false" /> In .Net FX 4.0 Config directory, there are two set of predefined partial trust config files for each new CAS model and legacy CAS model, trust config files with name legacy.XYZ.config are for legacy CAS model: New CAS model: Legacy CAS model: web_hightrust.config legacy.web_hightrust.config web_mediumtrust.config legacy.web_mediumtrust.config web_lowtrust.config legacy.web_lowtrust.config web_minimaltrust.config legacy.web_minimaltrust.config   The figure below shows in ASP.NET 4.0 new CAS model what permission set to grant to code for partial trust application using predefined partial trust levels and policy files:    There also some benefits that comes with the new CAS model: You can lock down a machine by making all managed code no-execute by default (e.g. setting the MyComputer zone to have no managed execution code permissions), it should still be possible to configure ASP.NET web applications to run as either full-trust or partial trust. UNC share doesn’t require full trust with CASPOL at machine-level CAS policy. Side effect that comes with the new CAS model: processRequestInApplicationTrust attribute is deprecated  in new CAS model since application domain always has partial trust permission set in new CAS model.   In ASP.NET 4.0 legacy CAS model or ASP.NET 2.0 CAS model, even though you assign partial trust level to a application but the application domain still has full trust permission set. The figure below shows in ASP.NET 4.0 legacy CAS model (or ASP.NET 2.0 CAS model) what permission set to grant to code for partial trust application using predefined partial trust levels and policy files:     What $AppDirUrl$, $CodeGen$, $Gac$ represents: $AppDirUrl$ The application's virtual root directory. This allows permissions to be applied to code that is located in the application's bin directory. 
For example, if a virtual directory is mapped to C:\YourWebApp, then $AppDirUrl$ would equate to C:\YourWebApp. $CodeGen$ The directory that contains dynamically generated assemblies (for example, the result of .aspx page compiles). This can be configured on a per application basis and defaults to %windir%\Microsoft.NET\Framework\{version}\Temporary ASP.NET Files. $CodeGen$ allows permissions to be applied to dynamically generated assemblies. $Gac$ Any assembly that is installed in the computer's global assembly cache (GAC). This allows permissions to be granted to strong named assemblies loaded from the GAC by the Web application.   The new customization of CAS Policy in ASP.NET 4.0 new CAS model 1. Define which named permission set in partial trust configuration files By default the permission set that will be assigned at application domain initialization time is the named "ASP.Net" permission set found in all predefined partial trust configuration files. However ASP.NET 4.0 allows you set PermissionSetName attribute to define which named permission set in a partial trust configuration file should be the one used to initialize an application domain. Example: add "ASP.Net_2" named permission set in partial trust configuration file: <PermissionSet class="NamedPermissionSet" version="1" Name="ASP.Net_2"> <IPermission class="FileIOPermission" version="1" Read="$AppDir$" PathDiscovery="$AppDir$" /> <IPermission class="ReflectionPermission" version="1" Flags ="RestrictedMemberAccess" /> <IPermission class="SecurityPermission " version="1" Flags ="Execution, ControlThread, ControlPrincipal, RemotingConfiguration" /></PermissionSet> Then you can use "ASP.Net_2" named permission set for the application domain permission set: <trust level="Something" legacyCasModel="false" permissionSetName="ASP.Net_2" /> 2. Define a custom set of Full Trust Assemblies for an application By using the new fullTrustAssemblies element to configure a set of Full Trust Assemblies for an application, you can modify set of partial trust assemblies to full trust at the machine, site or application level. The configuration definition is shown below: <fullTrustAssemblies> <add assemblyName="MyAssembly" version="1.1.2.3" publicKey="hex_char_representation_of_key_blob" /></fullTrustAssemblies> 3. Define <CodeGroup /> policy in partial trust configuration files ASP.NET 4.0 new CAS model will retain the ability for developers to optionally define <CodeGroup />with membership conditions and assigned permission sets. The specific restriction in ASP.NET 4.0 new CAS model though will be that the results of evaluating custom policies can only result in one of two outcomes: either an assembly is granted full trust, or an assembly is granted the partial trust permission set currently associated with the running application domain. It will not be possible to use custom policies to create additional custom partial trust permission sets. When parsing the partial trust configuration file: Any assemblies that match to code groups associated with "PermissionSet='FullTrust'" will run at full trust. Any assemblies that match to code groups associated with "PermissionSet='Nothing'" will result in a PolicyError being thrown from the CLR. This is acceptable since it provides administrators with a way to do a blanket-deny of managed code followed by selectively defining policy in a <CodeGroup /> that re-adds assemblies that would be allowed to run. 
Any assemblies that match to code groups associated with other permissions sets will be interpreted to mean the assembly should run at the permission set of the appdomain. This means that even though syntactically a developer could define additional "flavors" of partial trust in an ASP.NET partial trust configuration file, those "flavors" will always be ignored. Example: defines full trust in <CodeGroup /> for my strong named assemblies in partial trust config files: <CodeGroup class="FirstMatchCodeGroup" version="1" PermissionSetName="Nothing"> <IMembershipCondition    class="AllMembershipCondition"    version="1" /> <CodeGroup    class="UnionCodeGroup"    version="1"    PermissionSetName="FullTrust"    Name="My_Strong_Name"    Description="This code group grants code signed full trust. "> <IMembershipCondition      class="StrongNameMembershipCondition" version="1"       PublicKeyBlob="hex_char_representation_of_key_blob" /> </CodeGroup> <CodeGroup   class="UnionCodeGroup" version="1" PermissionSetName="ASP.Net">   <IMembershipCondition class="UrlMembershipCondition" version="1" Url="$AppDirUrl$/*" /> </CodeGroup> <CodeGroup class="UnionCodeGroup" version="1" PermissionSetName="ASP.Net">   <IMembershipCondition class="UrlMembershipCondition" version="1" Url="$CodeGen$/*"   /> </CodeGroup></CodeGroup>   4. Customize CAS policy at runtime in ASP.NET 4.0 new CAS model ASP.NET 4.0 new CAS model allows to customize CAS policy at runtime by using custom HostSecurityPolicyResolver that overrides the ASP.NET code access security policy. Example: use custom host security policy resolver to resolve partial trust web application bin folder MyTrustedAssembly.dll to full trust at runtime: You can create a custom host security policy resolver and compile it to assembly MyCustomResolver.dll with strong name enabled and deploy in GAC: public class MyCustomResolver : HostSecurityPolicyResolver{ public override HostSecurityPolicyResults ResolvePolicy(Evidence evidence) { IEnumerator hostEvidence = evidence.GetHostEnumerator(); while (hostEvidence.MoveNext()) { object hostEvidenceObject = hostEvidence.Current; if (hostEvidenceObject is System.Security.Policy.Url) { string assemblyName = hostEvidenceObject.ToString(); if (assemblyName.Contains(“MyTrustedAssembly.dll”) return HostSecurityPolicyResult.FullTrust; } } //default fall-through return HostSecurityPolicyResult.DefaultPolicy; }} Because ASP.NET accesses the custom HostSecurityPolicyResolver during application domain initialization, and a custom policy resolver requires full trust, you also can add a custom policy resolver in <fullTrustAssemblies /> , or deploy in the GAC. You also need configure a custom HostSecurityPolicyResolver instance by adding the HostSecurityPolicyResolverType attribute in the <trust /> element: <trust level="Something" legacyCasModel="false" hostSecurityPolicyResolverType="MyCustomResolver, MyCustomResolver" permissionSetName="ASP.Net" />   Note: If an assembly policy define in <CodeGroup/> and also in hostSecurityPolicyResolverType, hostSecurityPolicyResolverType will win. If an assembly added in <fullTrustAssemblies/> then the assembly has full trust no matter what policy in <CodeGroup/> or in hostSecurityPolicyResolverType.   
Other changes in ASP.NET 4.0 CAS Use the new transparency model introduced in .Net Framework 4.0 Change in dynamically compiled code generated assemblies by ASP.NET: In new CAS model they will be marked as security transparent level2 to use Framework 4.0 security transparent rule that means partial trust code is treated as completely Transparent and it is more strict enforcement. In legacy CAS model they will be marked as security transparent level1 to use Framework 2.0 security transparent rule for compatibility. Most of ASP.NET products runtime assemblies are also changed to be marked as security transparent level2 to switch to SecurityTransparent code by default unless SecurityCritical or SecuritySafeCritical attribute specified. You also can look at Security Changes in the .NET Framework 4 for more information about these security attributes. Support conditional APTCA If an assembly is marked with the Conditional APTCA attribute to allow partially trusted callers, and if you want to make the assembly both visible and accessible to partial-trust code in your web application, you must add a reference to the assembly in the partialTrustVisibleAssemblies section: <partialTrustVisibleAssemblies> <add assemblyName="MyAssembly" publicKey="hex_char_representation_of_key_blob" />/partialTrustVisibleAssemblies>   Most of ASP.NET products runtime assemblies are also changed to be marked as conditional APTCA to prevent use of ASP.NET APIs in partial trust environments such as Winforms or WPF UI controls hosted in Internet Explorer.   Differences between ASP.NET new CAS model and legacy CAS model: Here list some differences between ASP.NET new CAS model and legacy CAS model ASP.NET 4.0 legacy CAS model  : Asp.net partial trust appdomains have full trust permission Multiple different permission sets in a single appdomain are allowed in ASP.NET partial trust configuration files Code groups Machine CAS policy is honored processRequestInApplicationTrust attribute is still honored    New configuration setting for legacy model: <trust level="Something" legacyCASModel="true" ></trust><partialTrustVisibleAssemblies> <add assemblyName="MyAssembly" publicKey="hex_char_representation_of_key_blob" /></partialTrustVisibleAssemblies>   ASP.NET 4.0 new CAS model: ASP.NET will now run in homogeneous application domains. Only full trust or the app-domain's partial trust grant set, are allowable permission sets. It is no longer possible to define arbitrary permission sets that get assigned to different assemblies. If an application currently depends on fine-tuning the partial trust permission set using the ASP.NET partial trust configuration file, this will no longer be possible. processRequestInApplicationTrust attribute is deprecated Dynamically compiled assemblies output by ASP.NET build providers will be updated to explicitly mark assemblies as transparent. ASP.NET partial trust grant sets will be independent from any enterprise, machine, or user CAS policy levels. A simplified model for locking down web servers that only allows trusted managed web applications to run. Machine policy used to always grant full-trust to managed code (based on membership conditions) can instead be configured using the new ASP.NET 4.0 full-trust assembly configuration section. The full-trust assembly configuration section requires explicitly listing each assembly as opposed to using membership conditions. 
Alternatively, the membership condition(s) used in machine policy can instead be re-defined in a <CodeGroup /> within ASP.NET's partial trust configuration file to grant full-trust.   New configuration setting for new model: <trust level="Something" legacyCASModel="false" permissionSetName="ASP.Net" hostSecurityPolicyResolverType=".NET type string" ></trust><fullTrustAssemblies> <add assemblyName=”MyAssembly” version=”1.0.0.0” publicKey="hex_char_representation_of_key_blob" /></fullTrustAssemblies><partialTrustVisibleAssemblies> <add assemblyName="MyAssembly" publicKey="hex_char_representation_of_key_blob" /></partialTrustVisibleAssemblies>     Hope this post is helpful to better understand the ASP.Net 4.0 CAS. Xiaohong Tang ASP.NET QA Team


  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events. CLSF - 17th October XFS update (led by Jeff Liu) XFS keeps making rapid progress, with a lot of changes focused on infrastructure/performance improvements as well as new feature development. This is reflected in a sample statistic comparing XFS/Ext4+JBD2/Btrfs via: # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-) Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-) Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-) What made up those changes in XFS? Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock. Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime to reduce the CPU overhead. User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10; thanks to Dwight Engen for his efforts on this. Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e. with CRC enabled. CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in production environments. Readahead log object recovery; this change speeds up the log replay progress significantly. Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes with post-EOF space due to speculative preallocation, and to support improved quota management to free up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 mins by default, tunable), and on-demand scanning/trimming via ioctl(2). Bitter arguments ensued in this session, especially over the comparison between Ext4 and Btrfs in different areas; I had to spend a whole morning of the 1st day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays because: Stable - XFS has a good record of stability over the past 10 years; Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or more, which I credit to the XFS upstream code reviewers, who always perform serious code review as well as testing. Good performance for large and small files - that XFS does not work very well for small files has been an old story for years. Best choice (maybe) for distributed PB-scale filesystems - e.g. Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size. Best choice for large storage (>16TB) - Ext4 does not support a single file larger than around 15.95TB. Scalability - any objection to XFS being best on this point? :) XFS is better at dealing with transaction concurrency than Ext4 - why? The maximum size of the log in XFS is 2038MB compared to 128MB in Ext4. Misc - Ext4 is widely used and has been proven fast and stable in various loads and scenarios; XFS just needs more customers, and Btrfs is still on the road to manhood.
Ceph Introduction (led by Li Wang) This was a hot topic. Li gave us a nice introduction to the design as well as their current work. Actually, the Ceph client has been included in the Linux kernel since 2.6.34 and has been supported by OpenStack since Folsom, but it seems that it has not yet been widely deployed in production environments. Their major work is focused on inline data support to separate the metadata and data storage and reduce file access time: a file access needs two round trips, fetching the metadata from the MDS and then getting the data from the OSD, and small file access is additionally limited by network latency. The solution is, for small files, to store the data with the metadata, so that when accessing a small file the metadata server can push both metadata and data to the client at the same time. In this way, they can reduce the overhead of calculating the data offset and save the communication with the OSD. For this feature, they have only run some small-scale testing but really saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in production environments for large-scale, PB-level storage, but they could not give a positive answer; it looks like Ceph is not even deployed across Dreamhost (subject to confirmation). According to Li, they have only deployed Ceph for small-scale storage (32 nodes), although they would like to try 6000 nodes in the future. Improve Linux swap for flash storage (led by Shaohua Li) Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to efficiently using SSD for swap. SWAPOUT swap_map scan: swap_map is the in-memory data structure that tracks swap disk usage, but scanning it is a slow linear scan. It becomes a bottleneck when finding many adjacent pages in the use of SSD. Shaohua Li has changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSD. IO pattern: in most cases, the swap IO is in an interleaved pattern because of multiple reclaimers, or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved IO to some extent, we cannot count on it completely. Hence a per-cpu cluster is added on top of the previous change; it helps each reclaimer do sequential IO and makes it easier for the block layer to merge IO. TLB flush: if we are reclaiming one active page, we should first move the page from the active lru list to the inactive lru list, and then reclaim the page from the inactive lru to swap it out. During the process, we need to clear the PTE twice: first the 'A' (ACCESS) bit, second the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only part of the work has been pushed to mainline. SWAPIN: a page fault does iodepth=1 synchronous IO, but it is a bit of a waste to issue only a page-sized IO. The obvious solution is doing swap readahead.
But the current in-kernel swap readahead is arbitrary (always 8 pages) and does not always perform well for either random or sequential access workloads. Shaohua introduced a new flag for madvise (MADV_WILLNEED) to do swap prefetch, so the changes happen in the userspace API and leave the in-kernel readahead unchanged (a small userspace sketch of this hint is included at the end of this excerpt), though I think some improvement could also be done in the kernel. SWAP discard: as we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to asynchronous discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard is also optimized to the cluster. Misc: lock contention - with many concurrent swapouts and swapins, the contention on locks such as anon_vma or swap_lock is high, so he changed the swap_lock to a per-swap lock. But there is still some lock contention on very fast SSDs because of the swapcache address_space lock. Zproject (led by Bob Liu) Bob gave us a very nice introduction to the current memory compression status. There are now 3 projects (zswap/zram/zcache) which all aim at smoothing swap IO storms and improving performance, but they all have their own pros and cons. ZSWAP: it is implemented on top of the frontswap API and uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied", and it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is lru-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages which are then written to the real swap device, but this can fail when allocating the two pages. ZRAM: acts as a compressed ramdisk used as a swap device, and it uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since it needs more pages to uncompress and free just one page. ZRAM is preferred by embedded systems which may not have any real swap device. Now both ZRAM and ZSWAP are in the driver/staging tree, and in the mm community there are some discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet. ZCACHE: handles file page compression but was removed from staging recently. From industry (led by Tang Jie, LSI) An LSI engineer introduced several new products to us. The first is RAID5/6 cards that use full stripe writes to improve performance. The 2nd one he introduced is the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes; it is called DuraWrite and the typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression which is transparent to the upper layer. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is for databases. Another thing I think is worth mentioning is the NV-DRAM memory in NMR/Raptor, which is directly exposed to the host system. Applications can directly access the NV-DRAM via a memory address, using the standard system call mmap(). He said that it is very useful for database logging now.
This kind of NVM product has begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will affect the current OS layers, especially the file system; e.g. journaling may need to be redesigned to fully utilize such nonvolatile memory.

OCFS2 (led by Canquan Shen)

Without a doubt, Huawei has been the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major feature they are working on is support for ATS (atomic test and set), which can work together with the DLM. It looks like this idea is inspired by the VMware VMFS locking, i.e. http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

CLK - 18th October 2013

Improving Linux Development with Better Tools (Andi Kleen)

This talk focused on how to find and solve bugs as Linux complexity keeps growing. Generally, we can do this with the following kinds of tools:

Static code checkers, e.g. sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things, from simple mistakes to complex problems, but the challenges are: some are very slow, there are false positives, and it may take a concentrated effort to get the false positives down. In particular, no static checker I have found can follow indirect calls ("OO in C", common in the kernel): struct foo_ops { int (*do_foo)(struct foo *obj); } foo->do_foo(foo);

Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.

Fuzzers/test suites, e.g. Trinity is a great tool that finds many bugs, but it needs a manual model for each syscall. Modern fuzzers use automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf

Debuggers/tracers to understand code, e.g. ftrace, can dump on events/oops/custom triggers, but in many cases there is still too much overhead to run them all the time during debugging.

Tools to read/understand source, e.g. grep/cscope work great for many cases, but they do not understand indirect pointers (the OO-in-C model used in the kernel), i.e. "give me all 'do_foo' instances": struct foo_ops { int (*do_foo)(struct foo *obj); } = { .do_foo = my_foo }; foo->do_foo(foo); It would be great to have a cscope-like tool that understands this based on types/initializers.

XFS: The High Performance Enterprise File System (Jeff Liu) [slides]

I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts comparing performance between XFS/Btrfs/Ext4 for small files. About a dozen users raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and another attendee. The attendee questions were mainly focused on stability and comparison with other file systems.

Linux Containers (Feng Gao)

The speaker introduced the purpose of the various namespaces, including mount/UTS/IPC/Network/PID/User, as well as the system API/ABI. For the userspace tools, he mainly focused on Libvirt LXC rather than the LXC tools themselves.
Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two more possible new namespaces for the future: the first is audit, though it is not clear whether it should be assigned to the user namespace or not; the other is syslog, but the question is whether we really need it.

In-memory Compression (Bob Liu)

The same as at CLSF, a nice introduction that I have already mentioned above.

Misc

There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all those things.

-- Jeff Liu

    Read the article

  • HTML5 CSS3 layout not working

    - by John.Weland
    I have been asked by a local MMA (Mixed Martial Arts) School to help them develop a website. For the life of me I CANNOT get the layout to work correctly. When I get one section set where it should be another moves out of place! here is a pic of the layout: here The header should be a set height as should the footer the entire site at its widest point should be 1250px with the header/content area/footer and the like being 1240px the black in the picture is a scaling background to expand wider as larger resolution systems are viewing them. The full site should be a minimum-height of 100% but scale virtually as content in the target area deems necessary. My biggest issue currently is that my "sticky" footer doesn't stick once the content has stretched the content target area virtually. the Code is not pretty but here it is: HTML5 <!doctype html> <html> <head> <link rel="stylesheet" href="menu.css" type="text/css" media="screen"> <link rel="stylesheet" href="master.css" type="text/css" media="screen"> <meta charset="utf-8"> <title>Untitled Document</title> </head> <body bottommargin="0" leftmargin="0" rightmargin="0" topmargin="0"> <div id="wrap" class="wrap"><div id="logo" class="logo"><img src="images/comalogo.png" width="100" height="150"></div> <div id="header" class="header">College of Martial Arts</div> <div id="nav" class="nav"> <ul id="menu"><b> <li><a href="#">News</a></li> <li>·</li> <li><a href="#">About Us</a> <ul> <li><a href="#">The Instructors</a></li> <li><a href="#">Our Arts</a></li> </li> </ul> <li>·</li> <li><a href="#">Location</a></li> <li>·</li> <li><a href="#">Gallery</a></li> <li>·</li> <li><a href="#">MMA.tv</a></li> <li>·</li> <li><a href="#">Schedule</a></li> <li>·</li> <li><a href="#">Fight Gear</a></li></b> </div> <div id="social" class="social"> <a href="http://www.facebook.com/pages/Canyon-Lake-College-of-Martial-Arts/189432551104674"><img src="images/soc/facebook.png"></a> <a href="https://twitter.com/#!/CanyonLakeMMA"><img src="images/soc/twitter.png"></a> <a href="https://plus.google.com/108252414577423199314/"><img src="images/soc/google+.png"></a> <a href="http://youtube.com/user/clmmatv"><img src="images/soc/youtube.png"></a></div> <div id="mid" class="mid">test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br>test <br></div> <div id="footer" class="footer"> <div id="contact" style="left:0px;">tel: (830) 214-4591<br /> e: [email protected]<br /> add: 1273 FM 2673, Sattler, TX 78133<br /> </div> <div id="affiliates" style="right:0px;">Hwa Rang World Tang soo Do</div> <div id="copyright">Copyright © College of Martial Arts</div> </div> </body> </html> CSS3 -Dropdown Menu- @charset "utf-8"; /* CSS Document */ /* Main */ #menu { width: 100%; margin: 0; padding: 10px 0 0 0; list-style: none; background: #444; background: -moz-linear-gradient(#000, #333); background: -webkit-gradient(linear,left bottom,left top,color-stop(0, #444),color-stop(1, #000)); background: -webkit-linear-gradient(#000, #333); background: -o-linear-gradient(#000, #333); background: -ms-linear-gradient(#000, #333); background: linear-gradient(#000, #333); -moz-border-radius: 5px; border-radius: 5px; -moz-box-shadow: 0 2px 1px #9c9c9c; -webkit-box-shadow: 0 2px 1px #9c9c9c; box-shadow: 0 8px 8px #9c9c9c; /* outline:#000 solid thin; */ } #menu li { left:150px; float: left; padding: 0 0 10px 0; position:relative; color: #FC0; 
font-size:15px; font-family:'freshman' cursive; line-height:15px; } #menu a { float: left; height: 15px; line-height:15px; padding: 0 10px; color: #FC0; font-size:15px; text-decoration: none; text-shadow: 1 1px 0 #000; text-align:center; } #menu li:hover > a { color: #fafafa; } *html #menu li a:hover /* IE6 */ { color: #fafafa; } #menu li:hover > ul { display: block; } /* Sub-menu */ #menu ul { list-style: none; margin: 0; padding: 0; display: none; position: absolute; top: 25px; left: 0; z-index: 99999; background: #444; background: -moz-linear-gradient(#000, #333); background: -webkit-gradient(linear,left bottom,left top,color-stop(0, #111),color-stop(1, #444)); background: -webkit-linear-gradient(#000, #333); background: -o-linear-gradient(#000, #333); background: -ms-linear-gradient(#000, #333); background: linear-gradient(#000, #333); -moz-border-radius: 5px; border-radius: 5px; /* outline:#000 solid thin; */ } #menu ul li { left:0; -moz-box-shadow: none; -webkit-box-shadow: none; box-shadow: none; } #menu ul a { padding: 10px; height: auto; line-height: 1; display: block; white-space: nowrap; float: none; text-transform: none; } *html #menu ul a /* IE6 */ { height: 10px; width: 200px; } *:first-child+html #menu ul a /* IE7 */ { height: 10px; width: 200px; } /*#menu ul a:hover { background: #000; background: -moz-linear-gradient(#000, #333); background: -webkit-gradient(linear, left top, left bottom, from(#04acec), to(#0186ba)); background: -webkit-linear-gradient(#000, #333); background: -o-linear-gradient(#000, #333); background: -ms-linear-gradient(#000, #333); background: linear-gradient(#000, #333); }*/ /* Clear floated elements */ #menu:after { visibility: hidden; display: block; font-size: 0; content: " "; clear: both; height: 0; } * html #menu { zoom: 1; } /* IE6 */ *:first-child+html #menu { zoom: 1; } /* IE7 */ CSS3 -Master Style Sheet- @charset "utf-8"; /* CSS Document */ a:link {color:#FC0; text-decoration:none;} /* unvisited link */ a:visited {color:#FC0; text-decoration:none;} /* visited link */ a:hover {color:#FFF; text-decoration:none;} /* mouse over link */ a:active {color:#FC0; text-decoration:none;} /* selected link */ ul.a {list-style-type:none;} ul.b {list-style-type:inherit} html { } body { /*background-image:url(images/cagebg.jpg);*/ background-repeat:repeat; background-position:top; } div.wrap { margin: 0 auto; min-height: 100%; position: relative; width: 1250px; } div.logo{ top:25px; left:20px; position:absolute; float:top; height:150px; } /*Freshman FONT is on my computer needs to be uploaded to the webhost and rendered host side like a webfont*/ div.header{ background-color:#999; color:#FC0; margin-left:5px; height:80px; width:1240px; line-height:70px; font-family:'freshman' cursive; font-size:50px; text-shadow:8px 8px #9c9c9c; text-outline:1px 1px #000; text-align:center; background-color:#999; clear: both; } div.social{ height:50px; margin-left:5px; width:1240px; font-family:'freshman' cursive; font-size:50px; text-align:right; color:#000; background-color:#999; line-height:30px; box-sizing: border-box; ms-box-sizing: border-box; webkit-box-sizing: border-box; moz-box-sizing: border-box; padding-right:5px; } div.mid{ position:absolute; min-height:100%; margin-left:5px; width:1240px; font-family:'freshman' cursive; font-size:50px; text-align:center; color:#000; background-color:#999; } /*SIDE left and right should be 40px wide and a minimum height (100% the area from nav-footer) to fill between the NAV and the footer yet stretch as displayed content 
stretches the page longer (scrollable)*/ div #side.sright{ top:96px; right:0; position:absolute; float:right; height:100%; min-height:100%; width:40px; background-image:url(images/border.png); } /*Container should vary in height in accordance with content displayed*/ div #content.container{ } /*Footer should stick at ABSOLUTE BOTTOM of the page*/ div #footer{ font-family:'freshman' cursive; position:fixed; bottom:0; background-color:#000000; margin-left:5px; width:1240px; color:#FC0; clear: both; /*this clear property forces the .container to understand where the columns end and contain them*/ } /*HTML 5 support - Sets new HTML 5 tags to display:block so browsers know how to render the tags properly.*/ header, section, footer, aside, nav, article, figure { display: block; } Eventually once the layout is correct I have to use PHP to make calls for where data should be displayed from what database. If anyone can help me fix this layout and clean up the crap code, I'd really appreciate it. I've spent weeks trying to figure this out.

    Read the article

  • "No message serializer has been configured" error when starting NServiceBus endpoint

    - by SteveBering
    My GenericHost hosted service is failing to start with the following message: 2010-05-07 09:13:47,406 [1] FATAL NServiceBus.Host.Internal.GenericHost [(null)] <(null) - System.InvalidOperationException: No message serializer has been con figured. at NServiceBus.Unicast.Transport.Msmq.MsmqTransport.CheckConfiguration() in d:\BuildAgent-02\work\672d81652eaca4e1\src\impl\unicast\NServiceBus.Unicast.Msmq\ MsmqTransport.cs:line 241 at NServiceBus.Unicast.Transport.Msmq.MsmqTransport.Start() in d:\BuildAgent-02\work\672d81652eaca4e1\src\impl\unicast\NServiceBus.Unicast.Msmq\MsmqTransport .cs:line 211 at NServiceBus.Unicast.UnicastBus.NServiceBus.IStartableBus.Start(Action startupAction) in d:\BuildAgent-02\work\672d81652eaca4e1\src\unicast\NServiceBus.Uni cast\UnicastBus.cs:line 694 at NServiceBus.Unicast.UnicastBus.NServiceBus.IStartableBus.Start() in d:\BuildAgent-02\work\672d81652eaca4e1\src\unicast\NServiceBus.Unicast\UnicastBus.cs:l ine 665 at NServiceBus.Host.Internal.GenericHost.Start() in d:\BuildAgent-02\work\672d81652eaca4e1\src\host\NServiceBus.Host\Internal\GenericHost.cs:line 77 My endpoint configuration looks like: public class ServiceEndpointConfiguration : IConfigureThisEndpoint, AsA_Publisher, IWantCustomInitialization { public void Init() { // build out persistence infrastructure var sessionFactory = Bootstrapper.InitializePersistence(); // configure NServiceBus infrastructure var container = Bootstrapper.BuildDependencies(sessionFactory); // set up logging log4net.Config.XmlConfigurator.Configure(); Configure.With() .Log4Net() .UnityBuilder(container) .XmlSerializer(); } } And my app.config looks like: <configSections> <section name="MsmqTransportConfig" type="NServiceBus.Config.MsmqTransportConfig, NServiceBus.Core" /> <section name="UnicastBusConfig" type="NServiceBus.Config.UnicastBusConfig, NServiceBus.Core" /> <section name="Logging" type="NServiceBus.Config.Logging, NServiceBus.Core" /> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" requirePermission="false" /> </configSections> <Logging Threshold="DEBUG" /> <MsmqTransportConfig InputQueue="NServiceBus.ServiceInput" ErrorQueue="NServiceBus.Errors" NumberOfWorkerThreads="1" MaxRetries="2" /> <UnicastBusConfig DistributorControlAddress="" DistributorDataAddress="" ForwardReceivedMessagesTo="NServiceBus.Auditing"> <MessageEndpointMappings> <!-- publishers don't need to set this for their own message types --> </MessageEndpointMappings> </UnicastBusConfig> <connectionStrings> <add name="Db" connectionString="Data Source=..." providerName="System.Data.SqlClient" /> </connectionStrings> <log4net debug="true"> <root> <level value="INFO"/> </root> <logger name="NHibernate"> <level value="ERROR" /> </logger> </log4net> This has worked in the past, but seems to be failing when the generic host starts. My endpoint configuration is below, along with the app.config for the service. What is strange is that in my endpoint configuration, I am specifying to use the XmlSerializer for message serialization. I don't see any other errors in the console output preceding the error message. What am I missing? Thanks, Steve

    Read the article

  • JSON Feed Appears to be XHR when it should be JS

    - by Oscar Godson
    I don't get why it'd doing this with the 2nd feed (appearing as a XHR call rather than just JS [looking at it in Firefox/Firebug]). The 2nd feed has the exact same MIME type as Flickr's JSON feed, yet the PortlandOregon.gov one shows as XHR and i get a NULL callback when using $.getJSON and if i use $.ajax with a 'json' or 'jsonp' type i get nothing at all. If i do the Flickr one i get the normal "[object Object]" callback. Whats going on? Please help! This has been such a headache for about a week. And i have authorization to change the feed, but i have to request the change, so if anyone knows for absolute sure let me know that! Response Headers from Flickr's API ( http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&tagmode=any&format=json&jsoncallback=? ) [JS]: Date Mon, 15 Mar 2010 21:56:06 GMT P3P policyref="http://p3p.yahoo.com/w3c/p3p.xml", CP="CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE GOV" Expires Mon, 26 Jul 1997 05:00:00 GMT Last-Modified Mon, 15 Mar 2010 21:52:17 GMT Cache-Control no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma no-cache Vary Accept-Encoding Content-Encoding gzip Content-Length 3647 Connection close Content-Type application/x-javascript; charset=utf-8 Request Headers Host api.flickr.com User-Agent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6 Accept */* Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive Referer http://oscargodson.com/dev/addWidget/test.html Cookie BX=4lflj455amesp&b=3&s=iv; fltoto=0%2C0%2C0%2C0%2C1%2C0%3B0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%3B1%3B0%3B; search_z=t; localization=en-us%3Bus%3Bus PortlandOregon.gov ( http://www.portlandonline.com/shared/cfm/json.cfm?c=27321 ) [XHR]: Response Headers Connection close Date Mon, 15 Mar 2010 21:57:49 GMT Server Microsoft-IIS/6.0 Set-Cookie CONTACT_ID=0;path=/ LAST_USER=;path=/ BIGipServercgis_pol_web_pool-http=1191537418.20480.0000; path=/ Content-Type application/x-javascript; charset=utf-8 Request Headers Host www.portlandonline.com User-Agent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6 Accept application/json, text/javascript, */* Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive Referer http://oscargodson.com/dev/addWidget/test.html Origin http://oscargodson.com

    Read the article

  • RMI-applets - Cannot understand error message

    - by aeter
    In a simple RMI game I'm writing (an assignment in uni), I reveice: java.rmi.MarshalException: error marshalling arguments; nested exception is: java.net.SocketException: Broken pipe at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:138) at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:178) at java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:132) at $Proxy2.drawWorld(Unknown Source) at PlayerServerImpl$1.actionPerformed(PlayerServerImpl.java:180) at javax.swing.Timer.fireActionPerformed(Timer.java:271) at javax.swing.Timer$DoPostEvent.run(Timer.java:201) at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209) at java.awt.EventQueue.dispatchEvent(EventQueue.java:597) at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269) at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184) at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161) at java.awt.EventDispatchThread.run(EventDispatchThread.java:122) The error message appears after the second Player is registered with the RMI Server and the server starts to send the image (the array of pixels) to the 2 applets. The PlayerImpl and the PlayerServerImpl both extend UnicastRemoteObject. I have been struggling with other error messages for some time now, but I cannot understand how to troubleshoot this one. Please help. The relevant parts of the code are: PlayerServerImpl.java: ... timer = new Timer(10, new ActionListener() { // every 10 milliseconds do: @Override public void actionPerformed(ActionEvent e) { ... BufferedImage buff_image = new BufferedImage(GAME_APPLET_WIDTH, GAME_APPLET_HEIGHT, BufferedImage.TYPE_INT_RGB); // create a graphics context on the buffered image Graphics buff_g = buff_image.createGraphics(); ... // draw the score somewhere on the screen buff_g.drawString(score, GAME_APPLET_WIDTH - 20, 10); ... int[] rgbs = new int[GAME_APPLET_WIDTH * GAME_APPLET_HEIGHT]; int imgPixelsGrabbed[] = buff_image.getRGB(0,0,GAME_APPLET_WIDTH,GAME_APPLET_HEIGHT,rgbs,0,GAME_APPLET_WIDTH); // send the new state to the applets for (Player player : players) { player.drawWorld(imgPixelsGrabbed); System.out.println("Sent image to player"); } PlayerImpl.java: private PlayerApplet applet; public PlayerImpl(PlayerApplet applet) throws RemoteException { super(); this.applet = applet; } ... @Override public void drawWorld(int[] imgPixelsGrabbed) throws RemoteException { applet.setWorld(imgPixelsGrabbed); applet.repaint(); } ... PlayerApplet.java: ... private int[] world; // an array of pixels for the new image to be drawn ... // register players player = new PlayerImpl(applet); String serverIPAddressPort = ipAddressField.getText(); if (validateIPAddressPort(serverIPAddressPort)) { server = (PlayerServer) Naming.lookup("rmi://" + serverIPAddressPort + "/PlayerServer"); server.register(player); idPlayer = server.sendPlayerID(); ... @Override public void update(Graphics g) { buff_img = createImage((ImageProducer) new MemoryImageSource(getWidth(), getHeight(), world, 0, getWidth())); Graphics gr = buff_img.getGraphics(); paint(gr); g.drawImage(buff_img, 0, 0, this); } public void setWorld(int[] world) { this.world = world; }

    Read the article

  • C Programming - My program is good enough for my assignment but I know its not good

    - by Joe
    Hi there I'm just starting an assignment for uni and it's raised a question for me. I don't understand how to return a string from a function without having a memory leak. char* trim(char* line) { int start = 0; int end = strlen(line) - 1; /* find the start position of the string */ while(isspace(line[start]) != 0) { start++; } //printf("start is %d\n", start); /* find the position end of the string */ while(isspace(line[end]) != 0) { end--; } //printf("end is %d\n", end); /* calculate string length and add 1 for the sentinel */ int len = end - start + 2; /* initialise char array to len and read in characters */ int i; char* trimmed = calloc(sizeof(char), len); for(i = 0; i < (len - 1); i++) { trimmed[i] = line[start + i]; } trimmed[len - 1] = '\0'; return trimmed; } as you can see I am returning a pointer to char which is an array. I found that if I tried to make the 'trimmed' array by something like: char trimmed[len]; then the compiler would throw up a message saying that a constant was expected on this line. I assume this meant that for some reason you can't use variables as the array length when initialising an array, although something tells me that can't be right. So instead I made my array by allocating some memory to a char pointer. I understand that this function is probably waaaaay sub-optimal for what it is trying to do, but what I really want to know is: 1. Can you normally initialise an array using a variable to declare the length like: char trimmed[len]; ? 2. If I had an array that was of that type (char trimmed[]) would it have the same return type as a pointer to char (ie char*). 3. If I make my array by callocing some memory and allocating it to a char pointer, how do I free this memory. It seems to me that once I have returned this array, I can't access it to free it as it is a local variable. Many thanks in advance Joe
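For context, since trim() hands back a calloc'd buffer, the contract assumed here (my own sketch, not part of the assignment) is that the caller owns that memory and frees it when done, roughly like this:

#include <stdio.h>
#include <stdlib.h>

char* trim(char* line); /* the function defined above */

int main(void)
{
    char line[] = "   some text with spaces around it   ";

    char *t = trim(line);   /* trim() allocates the result on the heap */
    if (t != NULL) {
        printf("[%s]\n", t);
        free(t);            /* the caller releases the buffer returned by trim() */
    }
    return 0;
}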

    Read the article

  • Rails Joins and include columns from joins table

    - by seth.vargo
    I don't understand how to get the columns I want from rails. I have two models - A User and a Profile. A User :has_many Profile (because users can revert back to an earlier version of their profile): > DESCRIBE users; +----------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +----------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | username | varchar(255) | NO | UNI | NULL | | | password | varchar(255) | NO | | NULL | | | last_login | datetime | YES | | NULL | | +----------------+--------------+------+-----+---------+----------------+   > DESCRIBE profiles; +----------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +----------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | user_id | int(11) | NO | MUL | NULL | | | first_name | varchar(255) | NO | | NULL | | | last_name | varchar(255) | NO | | NULL | | | . . . . . . | | . . . . . . | | . . . . . . | +----------------+--------------+------+-----+---------+----------------+ In SQL, I can run the query: > SELECT * FROM profiles JOIN users ON profiles.user_id = users.id LIMIT 1; +----+-----------+----------+---------------------+---------+---------------+-----+ | id | username | password | last_login | user_id | first_name | ... | +----+-----------+----------+---------------------+---------+---------------+-----+ | 1 | john | ****** | 2010-12-30 18:04:28 | 1 | John | ... | +----+-----------+----------+---------------------+---------+---------------+-----+ See how I get all the columns for BOTH tables JOINED together? However, when I run this same query in Rails, I don't get all the columns I want - I only get those from Profile: # in rails console >> p = Profile.joins(:user).limit(1) >> [#<Profile ...>] >> p.first_name >> NoMethodError: undefined method `first_name' for #<ActiveRecord::Relation:0x102b521d0> from /Library/Ruby/Gems/1.8/gems/activerecord-3.0.1/lib/active_record/relation.rb:373:in `method_missing' from (irb):8 # I do NOT want to do this (AKA I do NOT want to use "includes") >> p.user >> NoMethodError: undefined method `user' for #<ActiveRecord::Relation:0x102b521d0> from /Library/Ruby/Gems/1.8/gems/activerecord-3.0.1/lib/active_record/relation.rb:373:in method_missing' from (irb):9 I want to (efficiently) return an object that has all the properties of Profile and User together. I don't want to :include the user because it doesn't make sense. The user should always be part of the most recent profile as if they were fields within the Profile model. How do I accomplish this?

    Read the article

  • Jquery Autocomplete after space press

    - by Limpep
    I am having an issue with my auto-complete feature such as when a user presses the space button the auto-complete doesn't show up again. Here is my code script type="text/javascript"> function lookup(inputString) { if(inputString.length == 0) { // Hide the suggestion box. $('#suggestions').hide(); } else { $.post("autocomplete.php", { queryString: ""+inputString+"" }, function(data){ if(data.length >0) { $('#suggestions').show(); $('#autoSuggestionsList').html(data); } }); } } // lookup function fill(thisValue) { $('#tag').val(thisValue); setTimeout("$('#suggestions').hide();", 200); } here my php code <?php require_once('config.php'); $db = new mysqli(DB_HOST, DB_USER, DB_PASSWORD,DB_DATABASE); if(!$db) { // Show error if we cannot connect. echo 'ERROR: Could not connect to the database.'; } else { // Is there a posted query string? if(isset($_POST['queryString'])) { $queryString = $db->real_escape_string($_POST['queryString']); // Is the string length greater than 0? if(strlen($queryString) >0) { // Run the query: We use LIKE '$queryString%' // The percentage sign is a wild-card, in my example of countries it works like this... // $queryString = 'Uni'; // Returned data = 'United States, United Kindom'; $query = $db->query("SELECT name FROM tag WHERE name LIKE '$queryString%' ORDER BY name LIMIT 10"); if($query) { // While there are results loop through them - fetching an Object (i like PHP5 btw!). while ($result = $query ->fetch_object()) { // Format the results, im using <li> for the list, you can change it. // The onClick function fills the textbox with the result. echo '<li onClick="fill(\''.$result->name.'\');">'.$result->name.'</li>'; } } else { echo 'ERROR: There was a problem with the query.'; } } else { // Dont do anything. } // There is a queryString. } else { echo 'There should be no direct access to this script!'; } } ? Any help would be great, thanks.

    Read the article

  • What can I send back to the browser while I wait for PHP execution?

    - by Matt Malesky
    So....I have a PHP page that involves a lot of backend execution, namely 'exec' calls to run shell commands on the host server. This can take upwards of a few minutes depending on the calls involved. (If you look below, each recursion through the exec calls is mounting a LUN; I'd like to sometimes do upwards of 100 per execution.) I'm curious on what I can do to send content back to the browser (and prevent it from timing out). <!DOCTYPE html> <html> <head> <title>sfvmtk</title> </head> <body> <?php // TEMPORARY VARIABLES FOR TESTING $hba = 'vmhba38'; $svip = '10.10.20.100'; $targets = array ( 0 => array ( 'iqn' => 'iqn.2010-01.com.sf:t5np.esxtest.41', 'account' => 'esx', 'isecret' => 'isecret00000', 'tsecret' => 'tsecret00000' ), 1 => array ( 'iqn' => 'iqn.2010-01.com.sf:t5np.esxtest2.42', 'account' => 'esx2', 'isecret' => 'isecret00001', 'tsecret' => 'tsecret00001' ) ); $hostname = $_REQUEST['hostname']; $username = $_REQUEST['username']; $password = $_REQUEST['password']; foreach ($targets as $ctarget) { exec('esxcli -s '.$hostname.' -u '.$username.' -p '.$password.' iscsi adapter discovery statictarget add -A '.$hba.' -a '.$svip.' -n '.$ctarget['iqn'], $out); exec('esxcli -s '.$hostname.' -u '.$username.' -p '.$password.' iscsi adapter target portal auth chap set -A '.$hba.' -a '.$svip.' -N '.$ctarget['account'].' -d uni -l required -n '.$ctarget['iqn'].' -S '.$ctarget['isecret'], $out); exec('esxcli -s '.$hostname.' -u '.$username.' -p '.$password.' iscsi adapter target portal auth chap set -A '.$hba.' -a '.$svip.' -N '.$ctarget['account'].' -d mutual -l required -n '.$ctarget['iqn'].' -S '.$ctarget['tsecret'], $out); } exec('vicfg-rescan --server '.$hostname.' --username '.$username.' --password '.$password.' '.$hba, $out); ?> </body> </html>
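A minimal sketch of the usual approach, assuming the goal is just to stream progress to the browser while the exec() calls run (the timeout and output-buffering functions are standard PHP; the markup echoed here is illustrative):

<?php
// Don't let PHP kill the script while the long-running esxcli calls execute.
set_time_limit(0);

// Turn off output buffering so partial output reaches the browser right away.
while (ob_get_level() > 0) {
    ob_end_flush();
}
ob_implicit_flush(true);

foreach ($targets as $ctarget) {
    // ... the three exec('esxcli ...') calls from above go here ...
    echo 'Configured target ' . htmlspecialchars($ctarget['iqn']) . "<br />\n";
    flush(); // push this status line to the browser immediately
}
?>

Note that a web server or proxy in front of PHP may still buffer the response, so this only helps once that buffering is disabled as well.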

    Read the article

  • File Server - Storage configuration: RAID vs LVM vs ZFS something else... ?

    - by privatehuff
We are a small company that does video editing, among other things, and need a place to keep backup copies of large media files and make it easy to share them. I've got a box set up with Ubuntu Server and 4 x 500 GB drives. They're currently set up with Samba as four shared folders that Mac/Windows workstations can see fine, but I want a better solution. There are two major reasons for this:

1. 500 GB is not really big enough (some projects are larger).
2. It is cumbersome to manage the current setup, because individual hard drives have different amounts of free space and duplicated data (for backup). It is confusing now and that will only get worse once there are multiple servers. ("the project is on server2 in share4" etc.)

So, I need a way to combine hard drives in such a way as to avoid complete data loss with the failure of a single drive, and so users see only a single share on each server. I've done Linux software RAID5 and had a bad experience with it, but would try it again. LVM looks ok but it seems like no one uses it. ZFS seems interesting but it is relatively "new". What is the most efficient and least risky way to combine the HDDs that is convenient for my users? Edit: The goal here is basically to create servers that contain an arbitrary number of hard drives but limit complexity from an end-user perspective. (i.e. they see one "folder" per server) Backing up data is not an issue here, but how each solution responds to hardware failure is a serious concern. That is why I lump RAID, LVM, ZFS, and who-knows-what together. My prior experience with RAID5 was also on an Ubuntu Server box and there was a tricky and unlikely set of circumstances that led to complete data loss. I could avoid that again but was left with a feeling that I was adding an unnecessary additional point of failure to the system. I haven't used RAID10 but we are on commodity hardware and the most data drives per box is pretty much fixed at 6. We've got a lot of 500 GB drives and 1.5 TB is pretty small. (Still an option for at least one server, however.) I have no experience with LVM and have read conflicting reports on how it handles drive failure. If a (non-striped) LVM setup could handle a single drive failing and only lose whichever files had a portion stored on that drive (and stored most files on a single drive only) we could even live with that. But as long as I have to learn something totally new, I may as well go all the way to ZFS. Unlike LVM, though, I would also have to change my operating system (?) so that increases the distance between where I am and where I want to be. I used a version of Solaris at uni and wouldn't mind it terribly, though. On the other end of the IT spectrum, I think I may also explore FreeNAS and/or Openfiler, but that doesn't really solve the how-to-combine-drives issue.
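For what it's worth, a minimal sketch of the plain software-RAID route on Ubuntu, with device names and the share name as examples rather than anything from the question (RAID6 survives two failed drives, RAID5 one):

# combine four 500 GB disks into a single fault-tolerant block device
sudo mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /srv/media
sudo mount /dev/md0 /srv/media

# then export the one mount point as a single Samba share in /etc/samba/smb.conf:
# [media]
#     path = /srv/media
#     read only = no

LVM or ZFS would replace the mdadm/mkfs steps, but the end-user view stays the same: one share per server.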

    Read the article

  • Oracle Data Mining a Star Schema: Telco Churn Case Study

    - by charlie.berger
    There is a complete and detailed Telco Churn case study "How to" Blog Series just posted by Ari Mozes, ODM Dev. Manager.  In it, Ari provides detailed guidance in how to leverage various strengths of Oracle Data Mining including the ability to: mine Star Schemas and join tables and views together to obtain a complete 360 degree view of a customer combine transactional data e.g. call record detail (CDR) data, etc. define complex data transformation, model build and model deploy analytical methodologies inside the Database  His blog is posted in a multi-part series.  Below are some opening excerpts for the first 3 blog entries.  This is an excellent resource for any novice to skilled data miner who wants to gain competitive advantage by mining their data inside the Oracle Database.  Many thanks Ari! Mining a Star Schema: Telco Churn Case Study (1 of 3) One of the strengths of Oracle Data Mining is the ability to mine star schemas with minimal effort.  Star schemas are commonly used in relational databases, and they often contain rich data with interesting patterns.  While dimension tables may contain interesting demographics, fact tables will often contain user behavior, such as phone usage or purchase patterns.  Both of these aspects - demographics and usage patterns - can provide insight into behavior.Churn is a critical problem in the telecommunications industry, and companies go to great lengths to reduce the churn of their customer base.  One case study1 describes a telecommunications scenario involving understanding, and identification of, churn, where the underlying data is present in a star schema.  That case study is a good example for demonstrating just how natural it is for Oracle Data Mining to analyze a star schema, so it will be used as the basis for this series of posts...... Mining a Star Schema: Telco Churn Case Study (2 of 3) This post will follow the transformation steps as described in the case study, but will use Oracle SQL as the means for preparing data.  Please see the previous post for background material, including links to the case study and to scripts that can be used to replicate the stages in these posts.1) Handling missing values for call data recordsThe CDR_T table records the number of phone minutes used by a customer per month and per call type (tariff).  For example, the table may contain one record corresponding to the number of peak (call type) minutes in January for a specific customer, and another record associated with international calls in March for the same customer.  This table is likely to be fairly dense (most type-month combinations for a given customer will be present) due to the coarse level of aggregation, but there may be some missing values.  Missing entries may occur for a number of reasons: the customer made no calls of a particular type in a particular month, the customer switched providers during the timeframe, or perhaps there is a data entry problem.  In the first situation, the correct interpretation of a missing entry would be to assume that the number of minutes for the type-month combination is zero.  In the other situations, it is not appropriate to assume zero, but rather derive some representative value to replace the missing entries.  The referenced case study takes the latter approach.  
The data is segmented by customer and call type, and within a given customer-call type combination, an average number of minutes is computed and used as a replacement value. In SQL, we need to generate additional rows for the missing entries and populate those rows with appropriate values. To generate the missing rows, Oracle's partition outer join feature is a perfect fit.

select cust_id, cdre.tariff, cdre.month, mins
from cdr_t cdr partition by (cust_id)
right outer join
    (select distinct tariff, month from cdr_t) cdre
    on (cdr.month = cdre.month and cdr.tariff = cdre.tariff);

.......

Mining a Star Schema: Telco Churn Case Study (3 of 3)

Now that the "difficult" work is complete - preparing the data - we can move to building a predictive model to help identify and understand churn. The case study suggests that separate models be built for different customer segments (high, medium, low, and very low value customer groups). To reduce the data to a single segment, a filter can be applied:

create or replace view churn_data_high as
select * from churn_prep where value_band = 'HIGH';

It is simple to take a quick look at the predictive aspects of the data on a univariate basis. While this does not capture the more complex multivariate effects as would occur with the full-blown data mining algorithms, it can give a quick feel for the predictive aspects of the data as well as validate the data preparation steps. Oracle Data Mining includes a predictive analytics package which enables quick analysis.

begin
  dbms_predictive_analytics.explain(
    'churn_data_high', 'churn_m6', 'expl_churn_tab');
end;
/
select * from expl_churn_tab where rank <= 5 order by rank;

ATTRIBUTE_NAME       ATTRIBUTE_SUBNAME EXPLANATORY_VALUE       RANK
-------------------- ----------------- ----------------- ----------
LOS_BAND                                      .069167052          1
MINS_PER_TARIFF_MON  PEAK-5                   .034881648          2
REV_PER_MON          REV-5                    .034527798          3
DROPPED_CALLS                                 .028110322          4
MINS_PER_TARIFF_MON  PEAK-4                   .024698149          5

From the above results, it is clear that some predictors do contain information to help identify churn (explanatory value > 0). The strongest univariate predictor of churn appears to be the customer's (binned) length of service. The second strongest churn indicator appears to be the number of peak minutes used in the most recent month. The subname column contains the interior piece of the DM_NESTED_NUMERICALS column described in the previous post. By using the object-relational approach, many related predictors are included within a single top-level column.

.....

NOTE: These are just EXCERPTS. Click here to start reading the Oracle Data Mining a Star Schema: Telco Churn Case Study from the beginning.
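For readers who stop at these excerpts: in the full case study the model build itself comes down to a DBMS_DATA_MINING.CREATE_MODEL call. A minimal sketch, where the settings table and its contents are my assumptions rather than text quoted from the post (the view, case id and target names come from the snippets above):

-- choose a classification algorithm via a settings table
create table churn_settings (setting_name varchar2(30), setting_value varchar2(4000));

begin
  -- package constants must be referenced from PL/SQL, hence the block
  insert into churn_settings values
    (dbms_data_mining.algo_name, dbms_data_mining.algo_decision_tree);

  dbms_data_mining.create_model(
    model_name          => 'CHURN_HIGH_MODEL',
    mining_function     => dbms_data_mining.classification,
    data_table_name     => 'churn_data_high',
    case_id_column_name => 'cust_id',
    target_column_name  => 'churn_m6',
    settings_table_name => 'churn_settings');
end;
/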

    Read the article

  • The last MVVM you'll ever need?

    - by Nuri Halperin
As my MVC projects mature and grow, the need for some omnipresent, ambient model properties quickly emerges. The application no longer has only one dynamic piece of data on the page: a sidebar with a shopping cart, some news flash on the side – pretty common stuff. The rub is that a controller is invoked in the context of a single intended request. The rest of the data, even though it could be just as dynamic, is expected to appear on its own. There are many solutions to this scenario. MVVM prescribes creating elaborate objects which expose your new data as a property on some uber-object, with more properties exposing the "side show" ambient data. The reason I don't love this approach is that it forces fairly acute awareness of the view, and soon enough you have many MVVM objects lying around, and views have to start doing null-checks in order to ensure you really supplied all the values before binding to them. Ick. Just as unattractive is the ViewData dictionary. It's not strongly typed, and in both this and the MVVM approach someone has to populate these properties – n'est-ce pas? Where does that live? With MVC2, we get the formerly-futures feature Html.RenderAction(). The feature allows you to plant a line in a view, of the format: <% Html.RenderAction("SessionInterest", "Session"); %> While this syntax looks very clean, I can't help being bothered by it. MVC was touting a very strong separation of concerns: the Model taking on the role of the business logic, the controller handling routing and performing minimal view-choosing operations, and the views strictly focused on rendering out angled-bracket tags. The RenderAction() syntax has the view calling some controller and invoking it inline with its runtime rendering. This – to my taste – embeds too much knowledge of controllers into the view's code – which was allegedly forbidden. The one-way flow "Controller receives data –> Controller invokes Model –> Controller selects view –> Controller hands data to view" now gets a "View calls controller and gets its own data", which is not so one-way anymore. Ick. I toyed with some other solutions a bit, including some base controllers, special view classes etc. My current favorite, though, is making use of the ExpandoObject and dynamic features of C# 4.0. If you follow Phil Haack or read a bit from David Heyden you can see the general picture emerging. The game changer is that using the new dynamic syntax, one can sprout properties on an object and make use of them in the view. Well, that beats having a bunch of uni-purpose MVVMs any day! Rather than statically exposed properties, we'll just use the capability of adding members at runtime.
Armed with new ideas and syntax, I went to work. First, I created a factory method to enrich the focus object: public static class ModelExtension { public static dynamic Decorate(this Controller controller, object mainValue) { dynamic result = new ExpandoObject(); result.Value = mainValue; result.SessionInterest = CodeCampBL.SessoinInterest(); result.TagUsage = CodeCampBL.TagUsage(); return result; } } This gives me a nice fluent way to have the controller add the rest of the ambient "side show" items (SessionInterest, TagUsage in this demo) and expose them all as the Model: public ActionResult Index() { var data = SyndicationBL.Refresh(TWEET_SOURCE_URL); dynamic result = this.Decorate(data); return View(result); } So now what remains is that my view knows to expect a dynamic object (rather than a statically typed one) so that the ASP.NET page compiler won't barf: <%@ Page Language="C#" Title="Ambient Demo" MasterPageFile="~/Views/Shared/Ambient.Master" Inherits="System.Web.Mvc.ViewPage<dynamic>" %> Notice the generic ViewPage<dynamic>. It doesn't work otherwise. In the page itself, the Model.Value property contains the main data returned from the controller. The nice thing about this is that the master page (Ambient.Master) also inherits from the generic ViewMasterPage<dynamic>. So rather than the page worrying about all this ambient stuff, the side bars and panels for ambient data all reside in a master page, and can be rendered using the RenderPartial() syntax: <% Html.RenderPartial("TagCloud", Model.SessionInterest as Dictionary<string, int>); %> Note here that a cast is necessary. This is because although dynamic is magic, it can't figure out what type this property is, and wants you to give it a type so its binder can figure out the right property to bind to at runtime. I use as; you can cast if you like. So there we go – no violation of MVC, no explosion of MVVM models, and voila – right? Well, I could not let this go without a tweak or two more. The first thing to improve is that some views may not need all the properties. In that case, it would be a waste of resources to populate every property. The solution to this is simple: rather than exposing properties, I changed the factory method to expose lambdas - Func<T> really. So only if and when a view accesses a member of the dynamic object does it load the data. public static class ModelExtension { // take two.. lazy loading! public static dynamic LazyDecorate(this Controller c, object mainValue) { dynamic result = new ExpandoObject(); result.Value = mainValue; result.SessionInterest = new Func<Dictionary<string, int>>(() => CodeCampBL.SessoinInterest()); result.TagUsage = new Func<Dictionary<string, int>>(() => CodeCampBL.TagUsage()); return result; } } Now that lazy loading is in place, there's really no reason not to hook up any and all possible ambient properties. Go nuts! Add them all in – they won't get invoked unless used. This now requires changing the usage signature of the ambient property methods – adding some parentheses in the master view: <% Html.RenderPartial("TagCloud", Model.SessionInterest() as Dictionary<string, int>); %> And, of course, the controller needs to call LazyDecorate() rather than the old Decorate(). The final touch is to introduce a convenience method to my Controller class, so that the tedium of calling Decorate() everywhere goes away.
This is done quite simply by adding a bunch of methods matching the View(object) and View(string, object) signatures of the Controller class: public ActionResult Index() { var data = SyndicationBL.Refresh(TWEET_SOURCE_URL); return AmbientView(data); } //these methods can reside in a base controller for the solution: public ViewResult AmbientView(dynamic data) { dynamic result = ModelExtension.LazyDecorate(this, data); return View(result); } public ViewResult AmbientView(string viewName, dynamic data) { dynamic result = ModelExtension.LazyDecorate(this, data); return View(viewName, result); } The call to AmbientView now replaces any call to View() that requires the ambient data. DRY satisfied, lazy loading in place, and no need to replace core pieces of the MVC pipeline. I call this a good MVC day. Enjoy!

    Read the article

  • Error compiling GLib in Ubuntu 14.04 (trying to install GimpShop)

    - by Nicolás Salvarrey
    I'm kinda new in Linux, so please take it easy on the most complicated stuff. I'm trying to install GimpShop. Installation guide asks me to install GLib first, and when I try to compile it using the make command I get errors. When I run the ./configure --prefix=/usr command, I get this: checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... no checking for mawk... mawk checking whether make sets $(MAKE)... yes checking whether to enable maintainer-specific portions of Makefiles... no checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking for the BeOS... no checking for Win32... no checking whether to enable garbage collector friendliness... no checking whether to disable memory pools... no checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ANSI C... none needed checking for style of include used by make... GNU checking dependency style of gcc... gcc3 checking for c++... no checking for g++... no checking for gcc... gcc checking whether we are using the GNU C++ compiler... no checking whether gcc accepts -g... no checking dependency style of gcc... gcc3 checking for gcc option to accept ANSI C... none needed checking for a BSD-compatible install... /usr/bin/install -c checking for special C compiler options needed for large files... no checking for _FILE_OFFSET_BITS value needed for large files... no checking for _LARGE_FILES value needed for large files... no checking for pkg-config... /usr/bin/pkg-config checking for gawk... (cached) mawk checking for perl5... no checking for perl... perl checking for indent... no checking for perl... /usr/bin/perl checking for iconv_open... yes checking how to run the C preprocessor... gcc -E checking for egrep... grep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking locale.h usability... yes checking locale.h presence... yes checking for locale.h... yes checking for LC_MESSAGES... yes checking libintl.h usability... yes checking libintl.h presence... yes checking for libintl.h... yes checking for ngettext in libc... yes checking for dgettext in libc... yes checking for bind_textdomain_codeset... yes checking for msgfmt... /usr/bin/msgfmt checking for dcgettext... yes checking for gmsgfmt... /usr/bin/msgfmt checking for xgettext... /usr/bin/xgettext checking for catalogs to be installed... am ar az be bg bn bs ca cs cy da de el en_CA en_GB eo es et eu fa fi fr ga gl gu he hi hr id is it ja ko lt lv mk mn ms nb ne nl nn no or pa pl pt pt_BR ro ru sk sl sq sr sr@ije sr@Latn sv ta tl tr uk vi wa xh yi zh_CN zh_TW checking for a sed that does not truncate output... /bin/sed checking for ld used by gcc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for /usr/bin/ld option to reload object files... -r checking for BSD-compatible nm... /usr/bin/nm -B checking whether ln -s works... 
yes checking how to recognise dependent libraries... pass_all checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking for g77... no checking for f77... no checking for xlf... no checking for frt... no checking for pgf77... no checking for fort77... no checking for fl32... no checking for af77... no checking for f90... no checking for xlf90... no checking for pgf90... no checking for epcf90... no checking for f95... no checking for fort... no checking for xlf95... no checking for ifc... no checking for efc... no checking for pgf95... no checking for lf95... no checking for gfortran... no checking whether we are using the GNU Fortran 77 compiler... no checking whether accepts -g... no checking the maximum length of command line arguments... 32768 checking command to parse /usr/bin/nm -B output from gcc object... ok checking for objdir... .libs checking for ar... ar checking for ranlib... ranlib checking for strip... strip checking if gcc static flag works... yes checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fPIC checking if gcc PIC flag -fPIC works... yes checking if gcc supports -c -o file.o... yes checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no configure: creating libtool appending configuration tag "CXX" to libtool appending configuration tag "F77" to libtool checking for extra flags to get ANSI library prototypes... none needed checking for extra flags for POSIX compliance... none needed checking for ANSI C header files... (cached) yes checking for vprintf... yes checking for _doprnt... no checking for working alloca.h... yes checking for alloca... yes checking for atexit... yes checking for on_exit... yes checking for char... yes checking size of char... 1 checking for short... yes checking size of short... 2 checking for long... yes checking size of long... 8 checking for int... yes checking size of int... 4 checking for void *... yes checking size of void *... 8 checking for long long... yes checking size of long long... 8 checking for __int64... no checking size of __int64... 0 checking for format to printf and scanf a guint64... %llu checking for an ANSI C-conforming const... yes checking if malloc() and friends prototypes are gmem.h compatible... no checking for growing stack pointer... yes checking for __inline... yes checking for __inline__... yes checking for inline... yes checking if inline functions in headers work... yes checking for ISO C99 varargs macros in C... yes checking for ISO C99 varargs macros in C++... no checking for GNUC varargs macros... yes checking for GNUC visibility attribute... yes checking whether byte ordering is bigendian... no checking dirent.h usability... yes checking dirent.h presence... yes checking for dirent.h... yes checking float.h usability... yes checking float.h presence... yes checking for float.h... yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking pwd.h usability... yes checking pwd.h presence... yes checking for pwd.h... 
yes checking sys/param.h usability... yes checking sys/param.h presence... yes checking for sys/param.h... yes checking sys/poll.h usability... yes checking sys/poll.h presence... yes checking for sys/poll.h... yes checking sys/select.h usability... yes checking sys/select.h presence... yes checking for sys/select.h... yes checking for sys/types.h... (cached) yes checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking sys/times.h usability... yes checking sys/times.h presence... yes checking for sys/times.h... yes checking for unistd.h... (cached) yes checking values.h usability... yes checking values.h presence... yes checking for values.h... yes checking for stdint.h... (cached) yes checking sched.h usability... yes checking sched.h presence... yes checking for sched.h... yes checking langinfo.h usability... yes checking langinfo.h presence... yes checking for langinfo.h... yes checking for nl_langinfo... yes checking for nl_langinfo and CODESET... yes checking whether we are using the GNU C Library 2.1 or newer... yes checking stddef.h usability... yes checking stddef.h presence... yes checking for stddef.h... yes checking for stdlib.h... (cached) yes checking for string.h... (cached) yes checking for setlocale... yes checking for size_t... yes checking size of size_t... 8 checking for the appropriate definition for size_t... unsigned long checking for lstat... yes checking for strerror... yes checking for strsignal... yes checking for memmove... yes checking for mkstemp... yes checking for vsnprintf... yes checking for stpcpy... yes checking for strcasecmp... yes checking for strncasecmp... yes checking for poll... yes checking for getcwd... yes checking for nanosleep... yes checking for vasprintf... yes checking for setenv... yes checking for unsetenv... yes checking for getc_unlocked... yes checking for readlink... yes checking for symlink... yes checking for C99 vsnprintf... yes checking whether printf supports positional parameters... yes checking for signed... yes checking for long long... (cached) yes checking for long double... yes checking for wchar_t... yes checking for wint_t... yes checking for size_t... (cached) yes checking for ptrdiff_t... yes checking for inttypes.h... yes checking for stdint.h... yes checking for snprintf... yes checking for C99 snprintf... yes checking for sys_errlist... yes checking for sys_siglist... yes checking for sys_siglist declaration... yes checking for fd_set... yes, found in sys/types.h checking whether realloc (NULL,) will work... yes checking for nl_langinfo (CODESET)... yes checking for OpenBSD strlcpy/strlcat... no checking for an implementation of va_copy()... yes checking for an implementation of __va_copy()... yes checking whether va_lists can be copied by value... no checking for dlopen... no checking for NSLinkModule... no checking for dlopen in -ldl... yes checking for dlsym in -ldl... yes checking for RTLD_GLOBAL brokenness... no checking for preceeding underscore in symbols... no checking for dlerror... yes checking for the suffix of shared libraries... .so checking for gspawn implementation... gspawn.lo checking for GIOChannel implementation... giounix.lo checking for platform-dependent source... checking whether to compile timeloop... yes checking if building for some Win32 platform... no checking for thread implementation... posix checking thread related cflags... -pthread checking for sched_get_priority_min... yes checking thread related libraries... 
-pthread checking for localtime_r... yes checking for posix getpwuid_r... yes checking size of pthread_t... 8 checking for pthread_attr_setstacksize... yes checking for minimal/maximal thread priority... sched_get_priority_min(SCHED_OTHER)/sched_get_priority_max(SCHED_OTHER) checking for pthread_setschedparam... yes checking for posix yield function... sched_yield checking size of pthread_mutex_t... 40 checking byte contents of PTHREAD_MUTEX_INITIALIZER... 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 checking whether to use assembler code for atomic operations... x86_64 checking value of POLLIN... 1 checking value of POLLOUT... 4 checking value of POLLPRI... 2 checking value of POLLERR... 8 checking value of POLLHUP... 16 checking value of POLLNVAL... 32 checking for EILSEQ... yes configure: creating ./config.status config.status: creating glib-2.0.pc config.status: creating glib-2.0-uninstalled.pc config.status: creating gmodule-2.0.pc config.status: creating gmodule-no-export-2.0.pc config.status: creating gmodule-2.0-uninstalled.pc config.status: creating gthread-2.0.pc config.status: creating gthread-2.0-uninstalled.pc config.status: creating gobject-2.0.pc config.status: creating gobject-2.0-uninstalled.pc config.status: creating glib-zip config.status: creating glib-gettextize config.status: creating Makefile config.status: creating build/Makefile config.status: creating build/win32/Makefile config.status: creating build/win32/dirent/Makefile config.status: creating glib/Makefile config.status: creating glib/libcharset/Makefile config.status: creating glib/gnulib/Makefile config.status: creating gmodule/Makefile config.status: creating gmodule/gmoduleconf.h config.status: creating gobject/Makefile config.status: creating gobject/glib-mkenums config.status: creating gthread/Makefile config.status: creating po/Makefile.in config.status: creating docs/Makefile config.status: creating docs/reference/Makefile config.status: creating docs/reference/glib/Makefile config.status: creating docs/reference/glib/version.xml config.status: creating docs/reference/gobject/Makefile config.status: creating docs/reference/gobject/version.xml config.status: creating tests/Makefile config.status: creating tests/gobject/Makefile config.status: creating m4macros/Makefile config.status: creating config.h config.status: config.h is unchanged config.status: executing depfiles commands config.status: executing default-1 commands config.status: executing glibconfig.h commands config.status: glibconfig.h is unchanged config.status: executing chmod-scripts commands nsalvarrey@Delleuze:~/glib-2.6.3$ ^C nsalvarrey@Delleuze:~/glib-2.6.3$ And then, with the make command, I get this: galias.h:83:39: error: 'g_ascii_digit_value' aliased to undefined symbol 'IA__g_ascii_digit_value' extern __typeof (g_ascii_digit_value) g_ascii_digit_value __attribute((alias("IA__g_ascii_digit_value"), visibility("default"))); ^ In file included from garray.c:35:0: galias.h:31:35: error: 'g_allocator_new' aliased to undefined symbol 'IA__g_allocator_new' extern __typeof (g_allocator_new) g_allocator_new __attribute((alias("IA__g_allocator_new"), visibility("default"))); ^ make[4]: *** [garray.lo] Error 1 make[4]: se sale del directorio «/home/nsalvarrey/glib-2.6.3/glib» make[3]: *** [all-recursive] Error 1 make[3]: se sale del directorio «/home/nsalvarrey/glib-2.6.3/glib» make[2]: *** [all] Error 2 make[2]: se sale del directorio «/home/nsalvarrey/glib-2.6.3/glib» make[1]: *** [all-recursive] Error 1 
make[1]: se sale del directorio «/home/nsalvarrey/glib-2.6.3» make: *** [all] Error 2 nsalvarrey@Delleuze:~/glib-2.6.3$ (it's actually a lot longer) Can somebody help me?

    Read the article

  • Entity Framework many-to-many using VB.Net Lambda

    - by bgs264
    Hello, I'm a newbie to StackOverflow so please be kind ;) I'm using Entity Framework in Visual Studio 2010 Beta 2 (.NET Framework 4.0 Beta 2). I have created an Entity Framework .edmx model from my database and I have a handful of many-to-many relationships. A trivial example of my database schema is: Roles (ID, Name, Active), Members (ID, DateOfBirth, DateCreated), RoleMembership (RoleID, MemberID). I am now writing the custom role provider (inheriting System.Configuration.Provider.RoleProvider) and have come to write the implementation of IsUserInRole(username, roleName). The LINQ-to-Entities queries I wrote, when traced in SQL Profiler, all produced CROSS JOIN statements, when what I want is an INNER JOIN. Dim query = From m In dc.Members From r In dc.Roles Where m.ID = 100 And r.Name = "Member" Select m My problem is almost exactly described here: http://stackoverflow.com/questions/553918/entity-framework-and-many-to-many-queries-unusable I'm sure that the solution presented there works well, but whilst I studied Java at uni and can mostly understand C#, I cannot understand the lambda syntax provided, and I need a similar example in VB. I've looked around the web for the best part of half a day but I'm no closer to an answer. So please can somebody advise how, in VB, I can construct a LINQ statement equivalent to this SQL: SELECT rm.RoleID FROM RoleMembership rm INNER JOIN Roles r ON r.ID = rm.RoleID INNER JOIN Members m ON m.ID = rm.MemberID WHERE r.Name = 'Member' AND m.ID = 101 I would use this query to see if Member 101 is in Role 3. (I appreciate I probably don't need the join to the Members table in SQL, but I imagine in LINQ I'd need to bring in the Member object?) UPDATE: I'm a bit closer by using multiple methods: Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load Dim count As Integer Using dc As New CBLModel.CBLEntities Dim persons = dc.Members.Where(AddressOf myTest) count = persons.Count End Using System.Diagnostics.Debugger.Break() End Sub Function myTest(ByVal m As Member) As Boolean Return m.ID = "100" AndAlso m.Roles.Select(AddressOf myRoleTest).Count > 0 End Function Function myRoleTest(ByVal r As Role) As Boolean Return r.Name = "Member" End Function SQL Profiler shows this: SQL:BatchStarting SELECT [Extent1].[ID] AS [ID], ... (all columns from Members snipped for brevity) ... FROM [dbo].[Members] AS [Extent1] RPC:Completed exec sp_executesql N'SELECT [Extent2].[ID] AS [ID], [Extent2].[Name] AS [Name], [Extent2].[Active] AS [Active] FROM [dbo].[RoleMembership] AS [Extent1] INNER JOIN [dbo].[Roles] AS [Extent2] ON [Extent1].[RoleID] = [Extent2].[ID] WHERE [Extent1].[MemberID] = @EntityKeyValue1',N'@EntityKeyValue1 int',@EntityKeyValue1=100 SQL:BatchCompleted SELECT [Extent1].[ID] AS [ID], ... (all columns from Members snipped for brevity) ... FROM [dbo].[Members] AS [Extent1] I'm not certain why it is using sp_executesql for the INNER JOIN statement, or why it's still running a SELECT that returns ALL members. Thanks. 
UPDATE 2: I've rewritten it by turning the above "multiple methods" into lambda expressions and combining them into one query, like this: Dim allIDs As String = String.Empty Using dc As New CBLModel.CBLEntities For Each retM In dc.Members.Where(Function(m As Member) m.ID = 100 AndAlso m.Roles.Select(Function(r As Role) r.Name = "Doctor").Count > 0) allIDs &= retM.ID.ToString & ";" Next End Using But it doesn't seem to work: "Doctor" is not a role that exists (I just put it in there for testing purposes), yet "allIDs" still gets set to "100;". The SQL in SQL Profiler this time looks like this: SELECT [Project1].* FROM ( SELECT [Extent1].*, (SELECT COUNT(1) AS [A1] FROM [dbo].[RoleMembership] AS [Extent2] WHERE [Extent1].[ID] = [Extent2].[MemberID]) AS [C1] FROM [dbo].[Members] AS [Extent1] ) AS [Project1] WHERE (100 = [Project1].[ID]) AND ([Project1].[C1] > 0) For brevity, I replaced the list of all the columns from the Members table with *. As you can see, it's just ignoring the "Role" part of the query... :/
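    A note for readers hitting the same symptom: in the update above, .Roles.Select(...) projects every role to a Boolean, so .Count > 0 only tests whether the member has any roles at all, which is why the non-existent "Doctor" role still matches. The usual way to express the original SQL is to filter with Any over the navigation property. The snippet below is a hypothetical sketch, not taken from the post; it reuses the CBLModel.CBLEntities context and the Roles navigation property named in the question, and Entity Framework would typically translate it into a single query (usually with EXISTS subqueries) rather than a CROSS JOIN.

        ' Hypothetical sketch: is member 101 in the "Member" role?
        ' Assumes System.Linq is imported and the generated CBLEntities context from the question.
        Dim isInRole As Boolean
        Using dc As New CBLModel.CBLEntities
            isInRole = dc.Members.Any(Function(m) m.ID = 101 AndAlso m.Roles.Any(Function(r) r.Name = "Member"))
        End Using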

    Read the article

  • jQuery Map Highlight - works fine at DOM ready but failed when loaded by AJAX

    - by Michael Mao
    Hi all: This is a uni assignment and I have already done some of the work. Please go to the password-protected directory on my server. Enter username "uts" and password "10479475", both without quotes, into the prompt and you will be able to see the webpage. Basically, if you hover your mouse over the world map near the upper-left corner, you can see the area underneath is "highlighted" by a gray region and a red border. This is done using a jQuery plugin (see here). This part works fine; however, after I use jQuery to load the specific continent map asynchronously, the newly loaded image does not work correctly. Testing under Firebug, I can see the plugin doesn't "like" the new image, because I cannot find the canvas or the other auto-generated elements that can be found around the world map. All the functionality is done in master.js; I believe you can just download a copy and check the code there. I do hope that I have followed the tutorials on the plugin's doc page, but I just cannot get through the final stage. Code used for the world map in HTML: <img id="worldmap" src="./img/world.gif" alt="world.gif" width="398" height="200" class="map" usemap="#worldmap"/> <map name="worldmap"> <area class='continent' href="#" shape="poly" title="North_America" coords="1,39, 40,23, 123,13, 164,17, 159,40, 84,98, 64,111, 29,89" /> </map> Code used for the world map in master.js: //when DOM is ready, do something $(document).ready(function() { $('.map').maphilight(); //call the map highlight main function } In contrast, code used for the specific continent map: //helper function to load specific continent map using AJAX function loadContinentMap(continent) { $('#continent-map-wrapper').children().remove(); //remove all children nodes first //inspiration taken from online : http://jqueryfordesigners.com/image-loading/ $('#continent-map-wrapper').append("<div id='loader' class='loading'><div>"); var img = new Image(); // wrap our new image in jQuery, then: // once the image has loaded, execute this code $(img).load(function () { $(this).hide(); // set the image hidden by default // with the holding div #loader, apply: // remove the loading class (so no background spinner), // then insert our image $('#loader').removeClass('loading').append(this); // fade our image in to create a nice effect $(this).fadeIn(); }).error(function () { // if there was an error loading the image, react accordingly // notify the user that the image could not be loaded $('#loader').removeClass('loading').append("<h1><div class='errormsg'>Loading image failed, please try again! If same error persists, please contact webmaster.</div></h1>"); }) //set a series of attributes to the img tag, these are for the map high lighting plugin. .attr('id', continent).attr('alt', '' + continent).attr('width', '576').attr('height', '300') .attr('usemap', '#city_' + continent).attr('class', 'citymap').attr('src', './img/' + continent + '.gif'); // *finally*, set the src attribute of the new image to our image //After image is loaded, apply the map highlighting plugin function again. $('.citymap').maphilight(); $('area.citymap').click(function() { alert($(this).attr('title') + ' is clicked!'); }); } Sorry about the messy code; I haven't refactored it yet. I am wondering why the canvas disappears for the continent map. Did I do anything wrong? Any hint is much appreciated, and thanks for any suggestions in advance!
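    One hedged observation for readers with the same problem: in loadContinentMap above, the call to $('.citymap').maphilight() runs immediately, before the new image has been appended to #continent-map-wrapper or has finished loading, so the plugin has no in-document image (and no resolvable usemap) to work on. A common approach is to apply the plugin inside the load callback, once the image is in the page. The snippet below is a hypothetical sketch, not taken from the post; it assumes a <map name="city_..."> element for the requested continent already exists in the page, as the usemap attribute set in the question implies.

        // Hypothetical sketch: run maphilight only after the AJAX-loaded image
        // has been appended and loaded, so its size and usemap can be resolved.
        $(img).load(function () {
            $(this).hide();
            $('#loader').removeClass('loading').append(this);
            $(this).fadeIn();
            $(this).maphilight(); // apply the plugin to the newly inserted image
            $('map[name="city_' + continent + '"] area').click(function () {
                alert($(this).attr('title') + ' is clicked!');
            });
        });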

    Read the article

  • Psychology researcher wants to learn new language

    - by user273347
    I'm currently considering R, matlab, or python, but I'm open to other options. Could you help me pick the best language for my needs? Here are the criteria I have in mind (not in order): Simple to learn. I don't really have a lot of free time, so I'm looking for something that isn't extremely complicated and/or difficult to pick up. I know some C, FWIW. Good for statistics/psychometrics. I do a ton of statistics and psychometrics analysis. A lot of it is basic stuff that I can do with SPSS, but I'd like to play around with the more advanced stuff too (bootstrapping, genetic programming, data mining, neural nets, modeling, etc). I'm looking for a language/environment that can help me run my simpler analyses faster and give me more options than a canned stat package like SPSS. If it can even make tables for me, then it'll be perfect. I also do a fair bit of experimental psychology. I use a canned experiment "programming" software (SuperLab) to make most of my experiments, but I want to be able to program executable programs that I can run on any computer and that can compile the data from the experiments in a spreadsheet. I know python has psychopy and pyepl and matlab has psychtoolbox, but I don't know which one is best. If R had something like this, I'd probably be sold on R already. I'm looking for something regularly used in academe and industry. Everybody else here (including myself, so far) uses canned stat and experiment programming software. One of the reasons I'm trying to learn a programming language is so that I can keep up when I move to another lab. Looking forward to your comments and suggestions. Thank you all for your kind and informative replies. I appreciate it. It's still a tough choice because of so many strong arguments for each language. Python - Thinking about it, I've forgotten so much about C already (I don't even remember what to do with an array) that it might be better for me to start from scratch with a simple program that does what it's supposed to do. It looks like it can do most of the things I'll need it to do, though not as cleanly as R and MATLAB. R - I'm really liking what I'm reading about R. The packages are perfect for my statistical work now. Given the purpose of R, I don't think it's suited to building psychological experiments though. To clarify, what I mean is making a program that presents visual and auditory stimuli to my specifications (hundreds of them in a preset and/or randomized sequence) and records the response data gathered from participants. MATLAB - It's awesome that cognitive and neuro folk are recommending MATLAB, because I'm preparing for the big leap from social and personality psychology to cognitive neuro. The problem is the Uni where I work doesn't have MATLAB licenses (and 3750 GBP for a compiler license is not an option for me haha). Octave looks like a good alternative. PsychToolbox is compatible with Octave, thankfully. SQL - Thanks for the tip. I'll explore that option, too. Python will be the least backbreaking and most useful in the short term. R is well suited to my current work. MATLAB is well suited to my prospective work. It's a tough call, but I think I am now equipped to make a more well-informed decision about where to go next. Thanks again!

    Read the article

< Previous Page | 7 8 9 10 11 12  | Next Page >