Search Results

Search found 32429 results on 1298 pages for 'project layout'.


  • Computer Science Degrees and Real-World Experience

    - by Steven Elliott Jr
    Recently, at a family reunion-type event I was asked by a high school student how important it is to get a computer science degree in order to get a job as a programmer, in lieu of actual programming experience. The kid has been working with Python and the Blender project as he's into making games and the like; it sounds like he has some decent programming chops. Now, as someone who has gone through a computer science degree, my initial response to this question is to say, "You absolutely MUST get a computer science degree in order to get a job as a programmer!" However, as I thought about this I was unsure as to whether my initial reaction was due in part to my own suffering as a CS student or because I feel that this is actually the case. Now, for me, I can say that I rarely use anything that I learned in college, in terms of the extremely hard math, algorithms, etc., etc., but I did come away with a decent attitude and the willingness to work through tough problems. I just don't know what to tell this kid; I feel like I should tell him to do the CS degree, but I have hired so many programmers that majored in things like English, Philosophy, and other liberal arts-type degrees, even some that never went to college. In fact my best developer falls into this latter category. He got started writing software for his church or something and then it took off into a passion. So, while I know this is one of those juicy potential down vote questions, I am just curious as to what everyone else thinks about this topic. Would you tell a high school kid about this? Perhaps if he/she already knows a good deal of programming and loves it he doesn't need a CS degree and could expand his horizons with a liberal arts degree. I know one of the creators of the Django web framework was an American Literature major and he is obviously a pretty gifted developer. Anyway, thanks for the consideration.

    Read the article

  • As a C# developer, would you learn Java to develop for Android or use MonoDroid instead?

    - by Dan Tao
    I'd consider myself pretty well versed in C#. It's my language of choice at the moment, and it's where basically all my professional experience lies. Still, I'm puzzled by the existence of the MonoDroid project. My understanding has always been that C# and Java are very close. Like, if you know one, you can learn the other really quickly. So, as I've considered developing my first Android app, I just assumed I would familiarize myself with Java enough to get started and then just sort of learn as I go. Wouldn't this make more sense than using MonoDroid, which is likely to be less feature-rich than the Java Android SDK, and requires learning its own API (albeit a .NET API) anyway? I just feel like it would be better to learn a new language (and an extremely popular one at that) and get some experience in it—when it's so close to what you already know anyway—rather than stick with a technology you're experienced with, without gaining any more valuable skills. Maybe I'm grossly misrepresenting the average potential MonoDroid user. Maybe it's more for people who are experienced in Java and .NET and just prefer .NET. Or maybe (in fact it's likely) there are other factors I just haven't considered. I'm just wondering, why would you use MonoDroid instead of just developing for Android using Java?

    Read the article

  • Can I trust the Basic schedule equation?

    - by Steve Campbell
    I've been reading Steve McConnell's Software Estimation: Demystifying the Black Art, and he gives an equation for estimating nominal schedule based on person-months of effort: ScheduleInMonths = 3.0 x EffortInMonths ^ (1/3) Per the book, this is very accurate (within 25%), although the 3.0 factor above varies depending on your organization (typically between 2 and 4). It is supposedly easy to use historical projects in your organization to derive an appropriate factor for your use. I am trying to reconcile the equation against Agile methods, using 2-6 week cycles which are often mini-projects that have a working deliverable at the end. If I have a team of 5 developers over 4 weeks (1 month), then EffortInMonths = 5 person-months. The algorithm then outputs a schedule of 3.0 x 5^(1/3) ≈ 5 months. 5 months is much more than 25% different from 1 month. If I lower the 3.0 factor to 0.6, then the algorithm works (outputs a schedule of approx 1 month). The lowest possible factor mentioned in the book, though, is 2.0. What's going on here? I want to trust this equation for estimating a "traditional" non-agile project, but I cannot trust it when it does not reconcile with my (agile) experience. Can someone help me understand?
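
    A quick sanity check of those numbers in Python (just a sketch; the 3.0 and 0.6 factors and the 5 person-month effort are the values from the question, not recommended ones):

        # Nominal schedule equation from McConnell, checked against the agile
        # scenario above. Factors and effort are taken from the question itself.
        def nominal_schedule_months(effort_person_months, factor=3.0):
            # ScheduleInMonths = factor * EffortInMonths ** (1/3)
            return factor * effort_person_months ** (1.0 / 3.0)

        effort = 5  # 5 developers for 1 month = 5 person-months
        print(nominal_schedule_months(effort))       # ~5.13 months with the default 3.0 factor
        print(nominal_schedule_months(effort, 0.6))  # ~1.03 months, the factor that "works"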

    Read the article

  • Not Happy With the Monochrome Visual Studio 11 Beta UI

    - by Ken Cox [MVP]
    I can’t wait for a third-party to come out with tools to return some colour to the flat, monochrome look of Visual Studio 11 (beta). What bugs me most are the icons. I feel like a newbie when I have to squint and analyze the shape of icons on the debugging toolbar just to get the one I want. (Fortunately, the meddlers didn’t mess with the keyboard commands so I’m not totally lost.) Not sure what usability studies told MS that bland is better. Maybe it is for most people, but not for me.  Gray, shades of gray and black. Ugh. And don’t get me started on the stupidity of using all-caps for window titles. Who approved that? I see that there’s a UserVoice poll on the topic (http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/2623017-add-some-color-to-visual-studio-11-beta) but I doubt that anything will change Microsoft’s opinion in time for the release. Once a product gets to a stable beta, most non-crashing stuff gets pushed to the next version. I hope I’m proved wrong. Fortunately, Visual Studio is quite customizable. Unless ‘Bland’ is hard-coded, some registry tweaks and a collection of replacement icons should allow dissenters like me back to productivity. BTW, other than hating the UI, VS 11 beta is working quite well for me on a .NET 4 project.Note: Although my username for the ASP.NET domain includes the letters "[MVP]", I'm no longer an MVP. Apparently it's nearly impossible to change a username in the system. My apologies for the misleading identifier but I tried to have it changed without success.

    Read the article

  • Screen becomes black after pressing dash or alt-tab

    - by cegerxwin
    I did an upgrade from 11.04 to 11.10. Unity 3d becomes a black screen after pressing the dash-button or after pressing alt-tab to switch between open windows. I can see the panel on the top(lock,sound,..) and the panel on the left (launcher) but the rest is black. It looks like a maximised black window. The open Windows are active but I cant see them. I logout by pressing logout in the right top corner and pressing enter (because logout is default focused on the dialogue screen) and leave unity3d. Unity3d worked with 11.04 very good. If I press the dash button the dash looks like an 16-Bit or 8-Bit window and buttons for maximise, minimise and close are displayed and looks inverted. I have rebooted my notebook just now and log in to Unity 3D and tested some features of Unity and everything works well. The black thing is only a layer. I can use my desktop but cant see anything because of the layer, but everything works. It seems so, that a layer appear when pressing dash or alt-tab and does not disappear when close dash or choose a running app with alt-tab. you will see the necessary info related video problems: Unity support: /usr/lib/nux/unity_support_test -p OpenGL vendor string: X.Org R300 Project OpenGL renderer string: Gallium 0.4 on ATI RC410 OpenGL version string: 2.1 Mesa 7.11-devel Not software rendered: yes Not blacklisted: yes GLX fbconfig: yes GLX texture from pixmap: yes GL npot or rect textures: yes GL vertex program: yes GL fragment program: yes GL vertex buffer object: yes GL framebuffer object: yes GL version is 1.4+: yes Unity 3D supported: yes xorg glxinfo lspci -nn | grep VGA 01:05.0 VGA compatible controller [0300]: ATI Technologies Inc RC410 [Radeon Xpress 200M] [1002:5a62]

    Read the article

  • How to create a PPA for C++ program?

    - by piotr
    My questions are: c++/gtkmm project created with NetBeans. How to make a package for a PPA from this? I have created the target file structure (*.desktop, icon file, ui glade files). The binary goes to /opt/extras.ubuntu.com/myagenda/bin/myagenda. There is also a folder of glade files that must go to /opt/extras.ubuntu.com/myagenda/bin/myagenda/ui. The desktop file goes to /usr/share/applications/myagenda.desktop. The icon goes to /usr/share/icons/hicolor/scalable/apps/myagenda.svg. As you see, there is a really small number of files. Now, how to manage all this stuff to create a package on the PPA which knows where and how to put these files to their targets?
    +-- opt
    |   +-- extras.ubuntu.com
    |       +-- myagenda
    |           +-- bin
    |           |   +-- myagenda
    |           +-- ui
    |               +-- item_btn_delete.png
    |               +-- item_btn_edit.png
    |               +-- myagenda.png
    |               +-- myagenda.svg
    |               +-- reminder.png
    |               +-- ui.glade
    +-- usr
        +-- share
            +-- applications
            |   +-- myagenda.desktop
            +-- icons
                +-- hicolor
                    +-- scalable
                        +-- apps
                            +-- myagenda.svg
    Update: Created an install file in the debian directory with these targets:
        data/myagenda /opt/extras.ubuntu/com/myagenda/bin
        data/ui/* /opt/extras.ubuntu/com/myagenda/ui
        data/myagenda.desktop /usr/share/applications
        data/myagenda.svg /usr/share/icons/hicolor/scalable/apps
    After dpkg-buildpackage it builds, but for the amd64 architecture. Now, trying to change that to i386.

    Read the article

  • Windows 7 Machine Makes Router Drop -All- Wireless Connections [closed]

    - by Hammer Bro.
    Note: I accidentally originally posted this question over at SuperUser, and I still think the issue is caused by some low-level networking practice of Windows 7, but I think the expertise here would be more apt to figuring it out. Apologies for the cross-post. Some background: My home network consists of my Desktop, a two-month old Windows 7 (x64) machine which is online most frequently (N-spec), as well as three other Windows XP laptops (all G) that only connect every now and then (one for work, one for Netflix, and the other for infrequent regular laptop uses). I used to have a Belkin F5D8236-4 wireless router, and everything worked great. A week ago, however, I found out that the Belkin absolutely in no way would establish a VPN connection, something that has become important for work. So I bought a Netgear WNR3500v2/U/L. The wireless was acting a little sketchy at first for just the Windows 7 machine, but I thought it had something to do with 802.11N and I was in a hurry so I just fished up an ethernet cable and disabled the computer's wireless. It has now become apparent, though, that whenever the Windows 7 machine is connected to the router, all wireless connections become unstable. I was using my work laptop for a solid six hours today with no trouble, having multiple SSH connections open over VPN and streaming internet radio in the background. Then, within two minutes of turning on this Windows 7 box, I had lost all connectivity over the wireless. And I was two feet away from the router. The same sort of thing happens on all of the other laptops -- Netflix can be playing stuff all weekend, but if I come up here and do things on this (W7) computer, the streaming will be dead within ten minutes. So here are my basic observations: If the Windows 7 machine is off, then all connections will have a Signal Strength of Very Good or Excellent and a Speed of 48-54 Mbps for an indefinite amount of time. Shortly after the Windows 7 machine is turned on, all wireless connections will experience a consistent decline in Speed down to 1.0 Mbps, eventually losing their connection entirely. These machines will continue to maintain 70% signal strength, as observed by themselves and router. Once dropped, a wireless connection will have difficulty reconnecting. And, if a connection manages to become established, it will quickly drop off again. The Windows 7 machine itself will continue to function just fine if it's using a wired connection, although it will experience these same issues over the wireless. All of the drivers and firmwares are up to date, and this happened both with the stock Netgear firmware as well as the (current) DD-WRT. What I've tried: Making sure each computer is being assigned a distinct IP. (They are.) Disabling UPnP and Stateful Packet Inspection on the router. Disabling Network Sharing, SSDP Discovery, TCP/IP NetBios Helper and Computer Browser services on the Windows 7 machine. Disabling QoS Packet Scheduler, IPv6, and Link Layer Topology Discovery options on my ethernet controller (leaving only Client for Microsoft Networks, File and Printer Sharing, and IPv4 enabled). What I think: It seems awfully similar to the problems discussed in detail at http://social.msdn.microsoft.com/Forums/en/wsk/thread/1064e397-9d9b-4ae2-bc8e-c8798e591915 (which was both the most relevant and concrete information I could dig up on the internet). I still think that something the Windows 7 IP stack (or just Operating System itself) is doing is giving the router fits. 
However, I could be wrong, because I have two key differences. One is that most instances of this problem are reported as the entire router dying or restarting, and mine still works just fine over the wired connection. The other is that it's a new router, tested with both the factory firmware and the (I assume) well-maintained DD-WRT project. Even if Windows 7 is still secretly sending IPv6 packets or the TCP Window Scaling implementation that I hear Vista caused some trouble with (even though I've tried my best to disable anything fancy), this router should support those functions. I don't want to get a new or a replacement router unless someone can convince me that this is a defective unit. But the problem seems too specific and predictable by my instincts to be a hardware hiccup. And I don't want to deal with the inevitable problems that always seem to take half a day to resolve when getting a new router, since I'm frantically working (including tomorrow) to complete a project by next week's deadline. Plus, I think in the worst case scenario, I could keep this router connected directly to the modem, disable its wireless entirely, and connect the old Belkin to it directly. That should allow me to still use VPN (although I'll have to plug my work laptop directly into that router), and then maintain wireless connections for all of the other computers. But that feels so wrong to me. Anyone have any ideas what the cause and possible solution could be? Clarifications: The Windows 7 machine is directly connected via an ethernet cable to the router for everything above. But while it is online, all other computers' wireless connections become unusable. It is not an issue of signal strength or interference -- no other devices within scanning range are using Channel 1, and the problem will affect computers that are literally feet away from the router with 95% signal strength.

    Read the article

  • Where to set Visual Studio 2013 property macros

    - by marcp
    I'm a new VS user. I've received some sample C++ projects working with a 3rd party API. They were saved in VS2012 format, but I have VS 2013. After conversion I find that there is an API specific macro defined in the project properties in the "Linker|General|Additional Library Directories" category. If I click on 'edit' I can replace the macro with an actual path, but how do I establish what the macro points to? In other words, how does one create a macro usable in multiple projects?

    Read the article

  • Catch Me If You Can

    - by Knut Vatsendvik
    Suppose you have a Proxy based Web Service using Oracle Service Bus. In a stage in the request pipeline,  you are using a Publish action to publish the incoming message to a JMS queue using a Business Service. What if the outbound transport provider throws an exception (outside of your pipeline)? Is your pipeline able to catch the error with an error handler?? This situation could occur because of a faulty connection, suspended queue, or some other reason. Here is the Request Pipeline in our simple test case. With an Error Handler added to the message flow containing a simple Log action. By default, the Publish action will invoke the service in a fire and forget fashion. Therefore any exception that occurs in the outbound transport will go unnoticed as shown in the following Invocation Trace. So what now? In a message flow, you can apply a Routing Options action to modify any or all of the following properties in the outbound request: URI, Quality of Service, Mode, Retry parameters, Message Priority. Now add the Routing Options action to the Request Action as shown below. Click the Routing Options to display its properties in the Properties View. Select the QoS option to set the Quality of Service element. Select Exactly Once to override the default setting, and Republish the project. The invocation will now block until the message is completely processed. Trying the same test case as earlier generates the following Invocation Trace showing that the Error Handler is now triggered.

    Read the article

  • How to become an Oracle SOA expert?

    - by Jürgen Kress
    Our SOA business continues to grow – more and more open SOA jobs are posted – you want to participate and become an Oracle SOA expert? Follow the five steps to your SOA success: Join the Oracle SOA Partner Community: You will receive our monthly newsletter with all the training, marketing and sales material and the latest product updates. SOA Partner Community Forum: Keynotes delivered by our VP for product management and sales, breakouts by our experts, and make sure you sign up for the hands-on bootcamps! The Forum is the best place to network with the Community & ACEs to exchange your project experience! Blog: Latest updates from the SOA Community on a weekly basis; for additional blogs please visit our wiki. Books: What you must read when you start with Oracle SOA Suite! SOA & Application Grid Specialization eBook and the SOA & Application Grid Specialization Checklist to become a certified expert and partner! See you in Utrecht. Jürgen Kress For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required)

    Read the article

  • Password Management for Oracle WebLogic customers

    - by Anthony Shorten
    One of the most common requests for enhancements I get across my desk is that customers wish to allow end users to change their passwords from our products. Now, typically password management is not in the realm of individual applications but it is an infrastructure requirement, so we don't usually add this to our roadmaps by default. The issue is that with the vast range of security stores that can be used with our product line across the Web Application Servers we support, it is almost impossible to come up with a generic enough API to work across them. If you have a specific security store on a specific Web Application Server platform then there are simpler solutions. There are a number of ways of implementing this without the product providing the functionality directly: Oracle sells Identity Management software that offers common APIs to manage passwords. You can purchase those products and link to the password change dialog in those products using Navigation Keys. If you are a customer using Oracle WebLogic, then there are sample JSPs that can be linked to provide this functionality under Oracle TechNet (registration required) under Code Samples (project S20). These can be added as a Navigation Key to complete the functionality. This will allow end users to manage their own passwords. Obviously these are all samples and should be treated as customizations when you implement them. If you wish to understand Navigation Keys, then look at the Oracle Utilities Application Framework Integration Guidelines (Doc Id: 789060.1) available from My Oracle Support.

    Read the article

  • Code Generation and IDE vs Writing by Hand

    - by sytycs
    I have been programming for about a year now. Pretty soon I realized that I need a great tool for writing code, and learned Vim. I was happy with C and Ruby and never liked the idea of an IDE, a view encouraged by a lot of reading about programming.[1] However, I started my first Java project. In a CS course we were using Visual Paradigm and were encouraged to let the program generate our code from a class diagram. I did not like that idea because: Our class diagram was buggy. Students more experienced in Java said they would write the code by hand. I had never written any Java before and would not understand a lot of the generated code. So I took a different approach and wrote all methods by hand (getters and setters included). My team members wrote their parts (partly generated by VP) in an IDE and I was "forced" to use it too. I realized they had generated equal amounts of code in a shorter amount of time and did not spend a lot of time setting their CLASSPATH and writing scripts for compiling that son of a b***. Additionally we had to implement a GUI and I don't see how we could have done that in a sane manner in Vim. So here is my problem: I fell in love with Vim and the Unix way. But it looks like for getting this job done (on time) the IDE/code generation approach is superior. Do you have similar experiences? Is Java, by the nature of the language, just more suitable for an IDE/code generation approach? Or am I lacking the knowledge to produce equal amounts of code by hand? [1] http://heather.cs.ucdavis.edu/~matloff/eclipse.html

    Read the article

  • How to deal with the "programming blowhard"?

    - by Peter G.
    (Repost, I posted this in the wrong section before, sorry) So I'm sure everyone has run into this person at one point or another: someone catches wind of your project or idea and initially shows some interest. You get to talking about some of your methods and usually around this time they interject, stating how you should use method X instead, or just use library Y. Not as a friendly suggestion, but bordering on a commandment. Often repeating the same advice over and over like an overzealous parrot. Personally, I like to reinvent the wheel when I'm learning, or even just for fun, even if it turns out worse than what's been done before. But this person apparently cannot fathom recreating ANY utility for such purposes, or possibly trying something that doesn't strictly follow traditional OOP practices, and will settle for nothing except their sense of perfection, and thus naturally heave their criticism sludge down my ears full force. To top it off, they eventually start justifying their advice by listing all the incredibly complex things they've coded single-handedly (usually along the lines of "trust me, I've made/used program X for a long time, blah blah blah"). Now, I'm far from being a programming master, I'm probably not even that good, and as such I value advice and critique, but I think advice/critique has a time and place. There is also a big difference between being helpful and being narcissistic. In the past I probably would have used a somewhat stronger George Carlin style dismissal, but I don't think burning bridges is the best approach anymore. Maybe I'm just an asshole, but do you have any advice on how to deal with this kind of verbal flogging?

    Read the article

  • What are the advantages and disadvantages to using your real name online?

    - by Jon Purdy
    As a programmer, do you see any professional or other advantage in using your real name in online discourse, versus an invented handle? I've always gone by a single username and had my real name displayed whenever possible, for a few reasons: My interests online are almost exclusively professional and aboveboard. It constructs a search-friendly public log of all of my work, everywhere. If someone wants to contact me, there are many ways to do it. My portfolio of work is all tied to me personally. Possible cons to full disclosure include: If you feel like becoming involved in something untoward, it could be harder. The psychopath who inherits your project can more easily find out where you live. You might be spammed by people who are not worth the precious time that could be better spent writing more of the brilliant software you're famous for. Your portfolio of work is all tied to you personally. It seems, anyway, that a vast majority of StackOverflow users go by invented handles rather than real names. Notable exceptions include the best-known users, who are typically well established in the industry. But how could we ever become legendary rockstar programmers if we didn't get our names out there? Discuss.

    Read the article

  • New Big Data Appliance Security Features

    - by mgubar
    The Oracle Big Data Appliance (BDA) is an engineered system for big data processing.  It greatly simplifies the deployment of an optimized Hadoop Cluster – whether that cluster is used for batch or real-time processing.  The vast majority of BDA customers are integrating the appliance with their Oracle Databases and they have certain expectations – especially around security.  Oracle Database customers have benefited from a rich set of security features:  encryption, redaction, data masking, database firewall, label based access control – and much, much more.  They want similar capabilities with their Hadoop cluster.    Unfortunately, Hadoop wasn’t developed with security in mind.  By default, a Hadoop cluster is insecure – the antithesis of an Oracle Database.  Some critical security features have been implemented – but even those capabilities are arduous to setup and configure.  Oracle believes that a key element of an optimized appliance is that its data should be secure.  Therefore, by default the BDA delivers the “AAA of security”: authentication, authorization and auditing. Security Starts at Authentication A successful security strategy is predicated on strong authentication – for both users and software services.  Consider the default configuration for a newly installed Oracle Database; it’s been a long time since you had a legitimate chance at accessing the database using the credentials “system/manager” or “scott/tiger”.  The default Oracle Database policy is to lock accounts thereby restricting access; administrators must consciously grant access to users. Default Authentication in Hadoop By default, a Hadoop cluster fails the authentication test. For example, it is easy for a malicious user to masquerade as any other user on the system.  Consider the following scenario that illustrates how a user can access any data on a Hadoop cluster by masquerading as a more privileged user.  In our scenario, the Hadoop cluster contains sensitive salary information in the file /user/hrdata/salaries.txt.  When logged in as the hr user, you can see the following files.  Notice, we’re using the Hadoop command line utilities for accessing the data: $ hadoop fs -ls /user/hrdataFound 1 items-rw-r--r--   1 oracle supergroup         70 2013-10-31 10:38 /user/hrdata/salaries.txt$ hadoop fs -cat /user/hrdata/salaries.txtTom Brady,11000000Tom Hanks,5000000Bob Smith,250000Oprah,300000000 User DrEvil has access to the cluster – and can see that there is an interesting folder called “hrdata”.  $ hadoop fs -ls /user Found 1 items drwx------   - hr supergroup          0 2013-10-31 10:38 /user/hrdata However, DrEvil cannot view the contents of the folder due to lack of access privileges: $ hadoop fs -ls /user/hrdata ls: Permission denied: user=drevil, access=READ_EXECUTE, inode="/user/hrdata":oracle:supergroup:drwx------ Accessing this data will not be a problem for DrEvil. He knows that the hr user owns the data by looking at the folder’s ACLs. To overcome this challenge, he will simply masquerade as the hr user. On his local machine, he adds the hr user, assigns that user a password, and then accesses the data on the Hadoop cluster: $ sudo useradd hr $ sudo passwd $ su hr $ hadoop fs -cat /user/hrdata/salaries.txt Tom Brady,11000000 Tom Hanks,5000000 Bob Smith,250000 Oprah,300000000 Hadoop has not authenticated the user; it trusts that the identity that has been presented is indeed the hr user. Therefore, sensitive data has been easily compromised. 
Clearly, the default security policy is inappropriate and dangerous to many organizations storing critical data in HDFS. Big Data Appliance Provides Secure Authentication The BDA provides secure authentication to the Hadoop cluster by default – preventing the type of masquerading described above. It accomplishes this thru Kerberos integration. Figure 1: Kerberos Integration The Key Distribution Center (KDC) is a server that has two components: an authentication server and a ticket granting service. The authentication server validates the identity of the user and service. Once authenticated, a client must request a ticket from the ticket granting service – allowing it to access the BDA’s NameNode, JobTracker, etc. At installation, you simply point the BDA to an external KDC or automatically install a highly available KDC on the BDA itself. Kerberos will then provide strong authentication for not just the end user – but also for important Hadoop services running on the appliance. You can now guarantee that users are who they claim to be – and rogue services (like fake data nodes) are not added to the system. It is common for organizations to want to leverage existing LDAP servers for common user and group management. Kerberos integrates with LDAP servers – allowing the principals and encryption keys to be stored in the common repository. This simplifies the deployment and administration of the secure environment. Authorize Access to Sensitive Data Kerberos-based authentication ensures secure access to the system and the establishment of a trusted identity – a prerequisite for any authorization scheme. Once this identity is established, you need to authorize access to the data. HDFS will authorize access to files using ACLs with the authorization specification applied using classic Linux-style commands like chmod and chown (e.g. hadoop fs -chown oracle:oracle /user/hrdata changes the ownership of the /user/hrdata folder to oracle). Authorization is applied at the user or group level – utilizing group membership found in the Linux environment (i.e. /etc/group) or in the LDAP server. For SQL-based data stores – like Hive and Impala – finer grained access control is required. Access to databases, tables, columns, etc. must be controlled. And, you want to leverage roles to facilitate administration. Apache Sentry is a new project that delivers fine grained access control; both Cloudera and Oracle are the project’s founding members. Sentry satisfies the following three authorization requirements: Secure Authorization:  the ability to control access to data and/or privileges on data for authenticated users. Fine-Grained Authorization:  the ability to give users access to a subset of the data (e.g. column) in a database Role-Based Authorization:  the ability to create/apply template-based privileges based on functional roles. With Sentry, “all”, “select” or “insert” privileges are granted to an object. The descendants of that object automatically inherit that privilege. A collection of privileges across many objects may be aggregated into a role – and users/groups are then assigned that role. This leads to simplified administration of security across the system. Figure 2: Object Hierarchy – granting a privilege on the database object will be inherited by its tables and views. Sentry is currently used by both Hive and Impala – but it is a framework that other data sources can leverage when offering fine-grained authorization. 
For example, one can expect Sentry to deliver authorization capabilities to Cloudera Search in the near future. Audit Hadoop Cluster Activity Auditing is a critical component to a secure system and is oftentimes required for SOX, PCI and other regulations. The BDA integrates with Oracle Audit Vault and Database Firewall – tracking different types of activity taking place on the cluster: Figure 3: Monitored Hadoop services. At the lowest level, every operation that accesses data in HDFS is captured. The HDFS audit log identifies the user who accessed the file, the time that file was accessed, the type of access (read, write, delete, list, etc.) and whether or not that file access was successful. The other auditing features include: MapReduce:  correlate the MapReduce job that accessed the file Oozie:  describes who ran what as part of a workflow Hive:  captures changes were made to the Hive metadata The audit data is captured in the Audit Vault Server – which integrates audit activity from a variety of sources, adding databases (Oracle, DB2, SQL Server) and operating systems to activity from the BDA. Figure 4: Consolidated audit data across the enterprise.  Once the data is in the Audit Vault server, you can leverage a rich set of prebuilt and custom reports to monitor all the activity in the enterprise. In addition, alerts may be defined to trigger violations of audit policies. Conclusion Security cannot be considered an afterthought in big data deployments. Across most organizations, Hadoop is managing sensitive data that must be protected; it is not simply crunching publicly available information used for search applications. The BDA provides a strong security foundation – ensuring users are only allowed to view authorized data and that data access is audited in a consolidated framework.
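
    To make the Sentry-style, role-based inheritance described above concrete, here is a small Python toy model (purely illustrative; this is not the Apache Sentry API, and the object, role and user names are made up):

        # Toy model of hierarchical, role-based authorization: a privilege granted
        # on a parent object (the database) is inherited by its descendants
        # (tables, views). Illustrative only -- not the Apache Sentry API.
        class SecurableObject:
            def __init__(self, name, parent=None):
                self.name = name
                self.parent = parent
                self.privileges = {}  # role name -> set of actions ("all", "select", "insert")

            def grant(self, role, action):
                self.privileges.setdefault(role, set()).add(action)

            def allowed(self, roles, action):
                # Walk up the hierarchy: a privilege on the database covers its tables.
                obj = self
                while obj is not None:
                    for role in roles:
                        actions = obj.privileges.get(role, set())
                        if "all" in actions or action in actions:
                            return True
                    obj = obj.parent
                return False

        hr_db = SecurableObject("hrdb")
        salaries = SecurableObject("hrdb.salaries", parent=hr_db)

        hr_db.grant("hr_analyst", "select")  # granted once, at the database level

        print(salaries.allowed({"hr_analyst"}, "select"))  # True, inherited from hrdb
        print(salaries.allowed({"guest"}, "select"))       # False, no matching role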

    Read the article

  • OpenVZ vs Xen, how much difference in performance?

    - by Aleksandr Levchuk
    There is a Xen vs. KVM performance question on ServerFault. What will be the performance difference if the choice is between Xen and OpenVZ? How is it best to measure? Some may say "you're comparing apples and oranges" but I have to choose one of the two and it needs to be a wise choice. Performance is most important to us. We may switch to Xen from OpenVZ because Xen is more ubiquitous, but only if the performance difference is not significant. In January 2011 I'm thinking of doing a head-to-head performance comparison - here is my project proposal to our Bioinformatics facility director.
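
    For the "how is it best to measure" part, one approach is to run the exact same micro-benchmarks inside an OpenVZ container and a Xen guest on identical hardware and compare the times. A rough Python sketch (workload sizes are arbitrary placeholders):

        # Tiny micro-benchmark to run unchanged inside an OpenVZ container and a
        # Xen guest, then compare wall-clock times. Sizes are arbitrary placeholders;
        # real comparisons should also include the actual bioinformatics workloads.
        import os
        import time

        def time_it(label, fn):
            start = time.time()
            fn()
            print(f"{label}: {time.time() - start:.2f}s")

        def cpu_bound():
            sum(i * i for i in range(10_000_000))

        def disk_bound(path="bench.tmp", size_mb=256):
            with open(path, "wb") as f:
                for _ in range(size_mb):
                    f.write(os.urandom(1024 * 1024))
                f.flush()
                os.fsync(f.fileno())
            os.remove(path)

        time_it("cpu", cpu_bound)
        time_it("disk write", disk_bound)

    Established suites such as sysbench or bonnie++ (or, better still, the facility's real pipelines) will give more defensible numbers than an ad-hoc script like this.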

    Read the article

  • Please help with bounding box/sprite collision in darkBASIC pro

    - by user1601163
    So I just recently learned BASIC and figured I would try making a clone of pong on my own in darkBASIC pro, and I made everything else work just fine except for the part that makes the ball bounce off the paddle. And yes I'm aware that the game is not yet finished. The error is on lines 39-51 EVERYTHING IS 2D. /////////////////////////////////////////////////////////// // // Project: Pong // Created: Friday, August 31, 2012 // Code: Brandon Spaulding // Art: Brandon Spaulding // Made in CIS lab at CPAVTS // Pong art and code © Brandon Spaulding 2012-2013 // ////////////////////////////////////////////////////////// y=150 x=0 ay=150 ax=612 ballx=300 bally=200 ballx_DIR=1 bally_DIR=1 hide mouse set global collision on //objectnumber=10 //make object box objectnumber,5,150,0 do load image "media\paddle1.png",1 load image "media\paddle2.png",2 load image "media\ball.png",3 sprite 1,x,y,1 sprite 2,ax,ay,2 sprite 3,ballx,bally,3 if upkey()=1 then y = y - 4 if downkey()=1 then y = y + 4 //num_1 = sprite collision(1,0) //num_2 = sprite collision(2,0) num_3 = sprite collision(3,0) for t=1 to 2 //ball&paddle collision if num_3 > 0 if bally_DIR=1 bally_DIR=0 else bally_DIR=1 endif if ballx_DIR=0 ballx_DIR=1 else ballx_DIR=0 endif endif //if bally > 1 and bally < 500 then bally=bally + 2.5 if bally_DIR=1 bally=bally-2.5 if bally<-2.5 bally_DIR=0 endif else bally=bally+2.5 if bally>452.5 bally_DIR=1 endif endif if ballx_DIR=1 ballx=ballx-2.5 if ballx<-2.5 ballx_DIR=0 endif else ballx=ballx+2.5 if ballx>612 ballx_DIR=1 endif endif //bally = bally + t //if bally < 600 or bally > 1 then bally = bally - 2.5 //if ballx < 400 or ballx > 1 then ballx = ballx + 2.5 //move sprite 3,1 next t if escapekey()=1 then exit loop end Thank you in advance for the help.
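
    For reference, the bounce logic itself is language-agnostic. Below is a rough Python sketch of the usual pong behaviour (an axis-aligned rectangle overlap test, with the paddle flipping only the horizontal direction and the walls flipping only the vertical one); it is not a DarkBASIC fix, and all coordinates and sizes are made-up placeholders:

        # Sketch of pong-style paddle collision. Typical pong behaviour: hitting a
        # paddle reverses the ball's horizontal direction; only the top/bottom
        # walls reverse the vertical direction. Values below are placeholders.
        def rects_overlap(ax, ay, aw, ah, bx, by, bw, bh):
            # Axis-aligned bounding-box overlap test.
            return ax < bx + bw and ax + aw > bx and ay < by + bh and ay + ah > by

        ball = {"x": 300.0, "y": 200.0, "dx": 2.5, "dy": 2.5, "size": 16}
        paddle = {"x": 0.0, "y": 150.0, "w": 16, "h": 128}

        def step():
            ball["x"] += ball["dx"]
            ball["y"] += ball["dy"]
            # Bounce off top/bottom walls: flip the vertical direction only.
            if ball["y"] <= 0 or ball["y"] + ball["size"] >= 480:
                ball["dy"] = -ball["dy"]
            # Bounce off the paddle: flip the horizontal direction only.
            if rects_overlap(ball["x"], ball["y"], ball["size"], ball["size"],
                             paddle["x"], paddle["y"], paddle["w"], paddle["h"]):
                ball["dx"] = -ball["dx"]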

    Read the article

  • Mac OSX application running but window is not visible

    - by S White
    Grateful for any assistance as this is driving me nuts... I'm running an application on my MacBook Pro, and since yesterday I can't see the application's main window. Its icon is showing in the Dock and its options are showing in the top menu bar on the desktop. The program is running normally in all other respects as far as I can tell (it's an audio sampler which I am triggering via an external pedal, so I can tell it's working). The window did show up once in Exposé but now it is not showing up there either (no idea why). I've tried adding the application to every space in 'Spaces' and also removing the preferences file, but neither of those helped. I have also reinstalled the application. I really need this back for a music project so any help would be massively appreciated, cheers!

    Read the article

  • JQGrid PDF Export

    - by thanigai
    Originally posted on: http://geekswithblogs.net/thanigai/archive/2013/06/17/jqgrdi-pdf-export.aspxJQGrid PDF Export The aim of this article is to address the PDF export from client side grid frameworks. The solution is done using the ASP.Net MVC 4 and VisualStudio 2012. The article assumes the developer to have a fair amount of knowledge on ASP.Net MVC and C#. Tools Used Visual Studio 2012 ASP.Net MVC 4 Nuget Package Manager JQGrid  is one of the client grid framework built on top of the JQuery framework. It helps in building a beautiful grid with paging, sorting and exiting options. There are also other features available as extension plugins and developers can write their own if needed. You can download the JQgrid from the  JQGrid  homepage or as NUget package. I have given below the command to download the JQGrid through the package manager console. From the tools menu select “Library Package Manager” and then select “Package Manager Console”. I have given the screenshot below. This command will pull down the latest JQGrid package and adds them in the script folder. Once the script is downloaded and referenced in the project update the bundleconfig file to add the script reference in the pages. Bundleconfig can be found in the  App_Start  folder in the project structure. bundles .Add (newStyleBundle(“~/Content/jqgrid”).Include (“~/Content/ui.jqgrid.css”)); bundles.Add( newScriptBundle( “~/bundles/jquerygrid”) .Include( “~/Scripts/jqGrid/jquery.jqGrid*”)); Once added the config’s refer the bundles to the Views/Shared/LayoutPage.cshtml. Add the following lines to the head section of the page. @Styles.Render(“~/Content/jqgrid”) Add the following lines to the end of the page before html close tags. @Scripts.Render(“~/bundles/jquery”) @Scripts.Render(“~/bundles/jqueryui”) @Scripts.Render(“ ~/bundles/jquerygrid”)              That’s all to be done from the view perspective. Once these steps are done the developer can start coding for the JQGrid. In this example we will modify the HomeController for the demo. The index action will be the default action. We will add an argument for this index action. Let it be nullable bool. It’s just to mark the pdf request. In the Index.cshtml we will add a table tag with an id “ gridTable “. We will use this table for making the grid. Since JQGrid is an extension for the JQUery we will initialize the grid setting at the  script  section of the page. This script section is marked at the end of the page to improve performance. The script section is placed just below the bundle reference for JQuery and JQueryUI. This is the one of improvement factors from “ why slow” provided by yahoo. < tableid=“gridTable”class=“scroll”></ table> < inputtype=“button”value=“Export PDF”onclick=“exportPDF();“/>  @section scripts { <scripttype=“text/javascript”> $(document).ready(function(){$(“#gridTable”).jqGrid({datatype:“json”,url:‘@Url.Action(“GetCustomerDetails”)‘,mtype:‘GET’,colNames:["CustomerID","CustomerName","Location","PrimaryBusiness"],colModel:[{name:"CustomerID",width:40,index:"CustomerID",align:"center"},{name:"CustomerName",width:40,index:"CustomerName",align:"center"},{name:"Location",width:40,index:"Location",align:"center"},{name:"PrimaryBusiness",width:40,index:"PrimaryBusiness",align:"center"},],height:250,autowidth:true,sortorder:“asc”,rowNum:10,rowList:[5,10,15,20],sortname:“CustomerID”,viewrecords:true});});  function exportPDF (){ document . location = ‘ @ Url . 
Action ( “Index” ) ?pdf=true’ ; } </ script >  } The exportPDF methos just sets the document location to the Index action method with PDF Boolean as true just to mark for download PDF. An inmemory list collection is used for demo purpose. The  GetCustomerDetailsmethod is the server side action method that will provide the data as JSON list. We will see the method explanation below. [ HttpGet] publicJsonResultGetCustomerDetails(){ varresult=new { total=1, page=1, records=customerList.Count(), rows=( customerList.Select( e=>new { id=e.CustomerID, cell=newstring[]{ e.CustomerID.ToString(), e.CustomerName, e.Location, e.PrimaryBusiness}})) .ToArray()}; returnJson( result,  JsonRequestBehavior.AllowGet); }   JQGrid can understand the response data from server in certain format. The server method shown above is taking care of formatting the response so that JQGrid understand the data properly. The response data should contain totalpages, current page, full record count, rows of data with id and remaining columns as string array. The response is built using an anonymous object and will be sent as a MVC JsonResult. Since we are using HttpGet it’s better to mark the attribute as HttpGet and also the JSON requestbehavious as AllowGet. The inmemory list is initialized in the homecontroller constructor for reference. Public class HomeController : Controller{ private readonly Ilist < CustomerViewModel > customerList ; public HomeController (){ customerList=newList<CustomerViewModel>() { newCustomerViewModel{ CustomerID=100, CustomerName=“Sundar”, Location=“Chennai”, PrimaryBusiness=“Teacing”}, newCustomerViewModel{ CustomerID=101, CustomerName=“Sudhagar”, Location=“Chennai”, PrimaryBusiness=“Software”}, newCustomerViewModel{ CustomerID=102, CustomerName=“Thivagar”, Location=“China”, PrimaryBusiness=“SAP”}, }; }  publicActionResultIndex( bool?pdf){ if ( !pdf.HasValue){ returnView( customerList);} else{ stringfilePath=Server.MapPath( “Content”)  +“Sample.pdf”; ExportPDF( customerList,  new string[]{  “CustomerID”,  “CustomerName”,  “Location”,  “PrimaryBusiness” },  filePath); return File ( filePath ,  “application/pdf” , “list.pdf” ); }}   The index actionmethod has a Boolean argument named “pdf”. It’s used to indicate for PDF download. When the application starts this method is first hit for initial page request. For PDF operation a filename is generated and then sent to the  ExportPDF  method which will take care of generating the PDF from the datasource. The  ExportPDF method is listed below.  Private static void ExportPDF<TSource>(IList<TSource>customerList,string [] columns, string filePath){ FontheaderFont=FontFactory.GetFont( “Verdana”,  10,  Color.WHITE); Fontrowfont=FontFactory.GetFont( “Verdana”,  10,  Color.BLUE); Documentdocument=newDocument( PageSize.A4);  PdfWriter writer = PdfWriter . GetInstance ( document ,  new FileStream ( filePath ,  FileMode . OpenOrCreate )); document.Open(); PdfPTabletable=newPdfPTable( columns.Length); foreach ( varcolumnincolumns){ PdfPCellcell=newPdfPCell( newPhrase( column,  headerFont)); cell.BackgroundColor=Color.BLACK; table.AddCell( cell); }  foreach  ( var item in customerList ) { foreach ( varcolumnincolumns){ stringvalue=item.GetType() .GetProperty( column) .GetValue( item) .ToString(); PdfPCellcell5=newPdfPCell( newPhrase( value,  rowfont)); table.AddCell( cell5); } }  document.Add( table); document.Close(); }   iTextSharp is one of the pioneer in PDF export. It’s an opensource library readily available as NUget library. 
This command will pulldown latest available library. I am using the version 4.1.2.0. The latest version may have changed. There are three main things in this library. Document This is the document class which takes care of creating the document sheet with particular size. We have used A4 size. There is also an option to define the rectangle size. This document instance will be further used in next methods for reference. PdfWriter PdfWriter takes the filename and the document as the reference. This class enables the document class to generate the PDF content and save them in a file. Font Using the FONT class the developer can control the font features. Since I need a nice looking font I am giving the Verdana font. Following this PdfPTable and PdfPCell are used for generating the normal table layout. We have created two set of fonts for header and footer. Font headerFont=FontFactory .GetFont(“Verdana”, 10, Color .WHITE); Font rowfont=FontFactory .GetFont(“Verdana”, 10, Color .BLUE);   We are getting the header columns as string array. Columns argument array is looped and header is generated. We are using the headerfont for this purpose. PdfWriter writer=PdfWriter .GetInstance(document, newFileStream (filePath, FileMode.OpenOrCreate)); document.Open(); PdfPTabletable=newPdfPTable( columns.Length); foreach ( varcolumnincolumns){ PdfPCellcell=newPdfPCell( newPhrase( column,  headerFont)); cell.BackgroundColor=Color.BLACK; table.AddCell( cell); }   Then reflection is used to generate the row wise details and form the grid. foreach  (var item in customerList){ foreach ( varcolumnincolumns) { stringvalue=item.GetType() .GetProperty( column) .GetValue( item) .ToString(); PdfPCellcell5=newPdfPCell( newPhrase( value,  rowfont)); table.AddCell( cell5); } } document . Add ( table ); document . Close ();   Once the process id done the pdf table is added to the document and document is closed to write all the changes to the filepath given. Then the control moves to the controller which will take care of sending the response as a JSON result with a filename. If the file name is not given then the PDF will open in the same page otherwise a popup will open up asking whether to save the file or open file. Return File(filePath, “application/pdf”,“list.pdf”);   The final result screen is shown below. PDF file opened below to show the output. Conclusion: This is how the export pdf is done for JQGrid. The problem area that is addressed here is the clientside grid frameworks won’t support PDF’s export. In that time it’s better to have a fine grained control over the data and generated PDF. iTextSharp has helped us to achieve our goal.

    Read the article

  • Linux router and firewall with IP accounting

    - by Andrew
    I'm working on a project to replace my organisation's aging Slackware gateway/router/firewall machine in our colo rack. Previously we used rc.firewall but we are now looking for something more modern and easily configurable. The requirements are: Act as a gateway router & firewall Port forwarding to a Terminal Server in the colo IP/traffic accounting, preferably accessible via SNMP (already using cacti for other servers) Possibility of acting as a PPTP server & routing these connections Is not an out-of-the-box Cisco product (don't have the finances or support to maintain it) I'd prefer to use Ubuntu or some other Debian-based distro but something that integrates everything we're looking for is certainly an option if it offers all the desired features and is easy to configure. Is there a simple set of packages that will provide me with the Firewall & Accounting features, or am I best served with a custom-built distro / other solution?

    Read the article

  • Announcing: Great Improvements to Windows Azure Web Sites

    - by ScottGu
    I’m excited to announce some great improvements to the Windows Azure Web Sites capability we first introduced earlier this summer.  Today’s improvements include: a new low-cost shared mode scaling option, support for custom domains with shared and reserved mode web-sites using both CNAME and A-Records (the later enabling naked domains), continuous deployment support using both CodePlex and GitHub, and FastCGI extensibility.  All of these improvements are now live in production and available to start using immediately. New “Shared” Scaling Tier Windows Azure allows you to deploy and host up to 10 web-sites in a free, shared/multi-tenant hosting environment. You can start out developing and testing web sites at no cost using this free shared mode, and it supports the ability to run web sites that serve up to 165MB/day of content (5GB/month).  All of the capabilities we introduced in June with this free tier remain the same with today’s update. Starting with today’s release, you can now elastically scale up your web-site beyond this capability using a new low-cost “shared” option (which we are introducing today) as well as using a “reserved instance” option (which we’ve supported since June).  Scaling to either of these modes is easy.  Simply click on the “scale” tab of your web-site within the Windows Azure Portal, choose the scaling option you want to use with it, and then click the “save” button.  Changes take only seconds to apply and do not require any code to be changed, nor the app to be redeployed: Below are some more details on the new “shared” option, as well as the existing “reserved” option: Shared Mode With today’s release we are introducing a new low-cost “shared” scaling mode for Windows Azure Web Sites.  A web-site running in shared mode is deployed in a shared/multi-tenant hosting environment.  Unlike the free tier, though, a web-site in shared mode has no quotas/upper-limit around the amount of bandwidth it can serve.  The first 5 GB/month of bandwidth you serve with a shared web-site is free, and then you pay the standard “pay as you go” Windows Azure outbound bandwidth rate for outbound bandwidth above 5 GB. A web-site running in shared mode also now supports the ability to map multiple custom DNS domain names, using both CNAMEs and A-records, to it.  The new A-record support we are introducing with today’s release provides the ability for you to support “naked domains” with your web-sites (e.g. http://microsoft.com in addition to http://www.microsoft.com).  We will also in the future enable SNI based SSL as a built-in feature with shared mode web-sites (this functionality isn’t supported with today’s release – but will be coming later this year to both the shared and reserved tiers). You pay for a shared mode web-site using the standard “pay as you go” model that we support with other features of Windows Azure (meaning no up-front costs, and you pay only for the hours that the feature is enabled).  A web-site running in shared mode costs only 1.3 cents/hr during the preview (so on average $9.36/month). Reserved Instance Mode In addition to running sites in shared mode, we also support scaling them to run within a reserved instance mode.  When running in reserved instance mode your sites are guaranteed to run isolated within your own Small, Medium or Large VM (meaning no other customers run within it).  You can run any number of web-sites within a VM, and there are no quotas on CPU or memory limits. 
You can run your sites using either a single reserved instance VM, or scale up to have multiple instances of them (e.g. 2 medium sized VMs, etc).  Scaling up or down is easy – just select the “reserved” instance VM within the “scale” tab of the Windows Azure Portal, choose the VM size you want, the number of instances of it you want to run, and then click save.  Changes take effect in seconds: Unlike shared mode, there is no per-site cost when running in reserved mode.  Instead you pay only for the reserved instance VMs you use – and you can run any number of web-sites you want within them at no extra cost (e.g. you could run a single site within a reserved instance VM or 100 web-sites within it for the same cost).  Reserved instance VMs start at 8 cents/hr for a small reserved VM.  Elastic Scale-up/down Windows Azure Web Sites allows you to scale-up or down your capacity within seconds.  This allows you to deploy a site using the shared mode option to begin with, and then dynamically scale up to the reserved mode option only when you need to – without you having to change any code or redeploy your application. If your site traffic starts to drop off, you can scale back down the number of reserved instances you are using, or scale down to the shared mode tier – all within seconds and without having to change code, redeploy, or adjust DNS mappings.  You can also use the “Dashboard” view within the Windows Azure Portal to easily monitor your site’s load in real-time (it shows not only requests/sec and bandwidth but also stats like CPU and memory usage). Because of Windows Azure’s “pay as you go” pricing model, you only pay for the compute capacity you use in a given hour.  So if your site is running most of the month in shared mode (at 1.3 cents/hr), but there is a weekend when it gets really popular and you decide to scale it up into reserved mode to have it run in your own dedicated VM (at 8 cents/hr), you only have to pay the additional pennies/hr for the hours it is running in the reserved mode.  There is no upfront cost you need to pay to enable this, and once you scale back down to shared mode you return to the 1.3 cents/hr rate.  This makes it super flexible and cost effective. Improved Custom Domain Support Web sites running in either “shared” or “reserved” mode support the ability to associate custom host names to them (e.g. www.mysitename.com).  You can associate multiple custom domains to each Windows Azure Web Site.  With today’s release we are introducing support for A-Records (a big ask by many users). With the A-Record support, you can now associate ‘naked’ domains to your Windows Azure Web Sites – meaning instead of having to use www.mysitename.com you can instead just have mysitename.com (with no sub-name prefix).  Because you can map multiple domains to a single site, you can optionally enable both a www and naked domain for a site (and then use a URL rewrite rule/redirect to avoid SEO problems). We’ve also enhanced the UI for managing custom domains within the Windows Azure Portal as part of today’s release.  Clicking the “Manage Domains” button in the tray at the bottom of the portal now brings up custom UI that makes it easy to manage/configure them: As part of this update we’ve also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to switch existing sites/domains to Windows Azure Web Sites with no downtime. 
Continuous Deployment Support with Git and CodePlex or GitHub One of the more popular features we released earlier this summer was support for publishing web sites directly to Windows Azure using source control systems like TFS and Git.  This provides a really powerful way to manage your application deployments using source control.  It is really easy to enable this from a website’s dashboard page: The TFS option we shipped earlier this summer provides a very rich continuous deployment solution that enables you to automate builds and run unit tests every time you check in your web-site, and then if they are successful automatically publish to Azure. With today’s release we are expanding our Git support to also enable continuous deployment scenarios and integrate with projects hosted on CodePlex and GitHub.  This support is enabled with all web-sites (including those using the “free” scaling mode). Starting today, when you choose the “Set up Git publishing” link on a website’s “Dashboard” page you’ll see two additional options show up when Git based publishing is enabled for the web-site: You can click on either the “Deploy from my CodePlex project” link or “Deploy from my GitHub project” link to walkthrough a simple workflow to configure a connection between your website and a source repository you host on CodePlex or GitHub.  Once this connection is established, CodePlex or GitHub will automatically notify Windows Azure every time a checkin occurs.  This will then cause Windows Azure to pull the source and compile/deploy the new version of your app automatically.  The below two videos walkthrough how easy this is to enable this workflow and deploy both an initial app and then make a change to it: Enabling Continuous Deployment with Windows Azure Websites and CodePlex (2 minutes) Enabling Continuous Deployment with Windows Azure Websites and GitHub (2 minutes) This approach enables a really clean continuous deployment workflow, and makes it much easier to support a team development environment using Git: Note: today’s release supports establishing connections with public GitHub/CodePlex repositories.  Support for private repositories will be enabled in a few weeks. Support for multiple branches Previously, we only supported deploying from the git ‘master’ branch.  Often, though, developers want to deploy from alternate branches (e.g. a staging or future branch). This is now a supported scenario – both with standalone git based projects, as well as ones linked to CodePlex or GitHub.  This enables a variety of useful scenarios.  For example, you can now have two web-sites - a “live” and “staging” version – both linked to the same repository on CodePlex or GitHub.  You can configure one of the web-sites to always pull whatever is in the master branch, and the other to pull what is in the staging branch.  This enables a really clean way to enable final testing of your site before it goes live. This 1 minute video demonstrates how to configure which branch to use with a web-site. Summary The above features are all now live in production and available to use immediately.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using them today.  Visit the Windows Azure Developer Center to learn more about how to build apps with it. 
We’ll have even more new features and enhancements coming in the weeks ahead – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month).  Keep an eye out on my blog for details as these new features become available.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

  • To Do list for multiple users using MySQL, need some advice regarding project work?

    - by Steve
    I am thinking of creating a To-Do list application for my project work.  It is meant for multiple users: each user will have his own login and password to access his account or profile, and from there he/she can manage his own To-Do list – kind of like Remember the Milk.  Since different users will have different tasks to perform, each user needs his own separate list.  In tabular form, the data fields would look like:

    Task || Priority || Deadline || Number of days required || Status

    So what I meant to ask: can this type of thing be done using MySQL as the database and any web-based server-side language (PHP, ASP, JSP)?  In other words, can an RDBMS like MySQL handle different member users each keeping and maintaining their own To-Do list?
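    A minimal sketch of how this could look with MySQL and PHP (using PDO).  The table, column and function names below are illustrative assumptions, not something from the question itself:

        <?php
        // Illustrative schema (one row per task, keyed to its owner):
        //   CREATE TABLE users (
        //       id INT AUTO_INCREMENT PRIMARY KEY,
        //       username VARCHAR(50) NOT NULL UNIQUE,
        //       password_hash VARCHAR(255) NOT NULL
        //   );
        //   CREATE TABLE tasks (
        //       id INT AUTO_INCREMENT PRIMARY KEY,
        //       user_id INT NOT NULL,
        //       task VARCHAR(255) NOT NULL,
        //       priority TINYINT NOT NULL DEFAULT 3,
        //       deadline DATE NULL,
        //       days_required INT NULL,
        //       status VARCHAR(20) NOT NULL DEFAULT 'open',
        //       FOREIGN KEY (user_id) REFERENCES users(id)
        //   );

        // Fetch the to-do list belonging to one user only.
        function getTasksForUser(PDO $db, $userId)
        {
            $stmt = $db->prepare(
                'SELECT task, priority, deadline, days_required, status
                   FROM tasks
                  WHERE user_id = :uid
                  ORDER BY deadline'
            );
            $stmt->execute(array(':uid' => (int) $userId));
            return $stmt->fetchAll(PDO::FETCH_ASSOC);
        }

        // Usage (connection details are placeholders):
        $db    = new PDO('mysql:host=localhost;dbname=todo', 'dbuser', 'dbpass');
        $tasks = getTasksForUser($db, 42); // 42 = id of the logged-in user

    Because every query filters on user_id, each user only ever sees and maintains their own list, which is exactly the per-user separation described above – so this is a natural fit for an RDBMS like MySQL with PHP, ASP or JSP on top.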

  • Downtime scheduling - can it be automated?

    - by Nnn
    When servers need to have scheduled downtime at my workplace we follow a process roughly like the following:

    1. Propose a time for the work to take place on specific box/es
    2. Look up the list of stakeholders for those box/es
    3. Seek approval from the stakeholders (service owners/management etc.) via email
    4. Incorporate changes to the proposed time if necessary, and repeat from step 2 until everyone is happy with the time
    5. Send out a notification email of the agreed time, and ask the people who care about when the box is going down – some stakeholders, plus the staff doing the work – to manually add it to their calendars
    6. Do the actual work

    Is there an OSS project that we could use to automate this process?  My googling has been fruitless so far.  Will we need to build something ourselves?  Would anyone else be interested in something like this?
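    If it did come to building something in-house, a very rough sketch of the core pieces follows.  Everything here (table names, columns, function names) is an illustrative assumption rather than an existing OSS tool, and it is written in PHP only as an example:

        <?php
        // Illustrative data model:
        //   servers(id, hostname)
        //   stakeholders(id, server_id, name, email)
        //   downtime_requests(id, server_id, proposed_start, proposed_end, status)
        //   approvals(id, request_id, stakeholder_id, approved, responded_at)

        // A downtime window can only be confirmed once every stakeholder of the
        // affected server has approved it (steps 3-4 above).
        function isFullyApproved(PDO $db, $requestId)
        {
            $stmt = $db->prepare(
                'SELECT COUNT(*) FROM approvals
                  WHERE request_id = :id AND (approved IS NULL OR approved = 0)'
            );
            $stmt->execute(array(':id' => (int) $requestId));
            return (int) $stmt->fetchColumn() === 0;
        }

        // Run from cron: remind stakeholders who have not responded yet.
        function remindPendingStakeholders(PDO $db, $requestId)
        {
            $stmt = $db->prepare(
                'SELECT s.email
                   FROM approvals a
                   JOIN stakeholders s ON s.id = a.stakeholder_id
                  WHERE a.request_id = :id AND a.responded_at IS NULL'
            );
            $stmt->execute(array(':id' => (int) $requestId));
            foreach ($stmt->fetchAll(PDO::FETCH_COLUMN) as $email) {
                mail($email, 'Downtime approval needed',
                     'Please approve or reject the proposed downtime window.');
            }
        }

    Step 5 (calendars) could then be handled by emailing an iCalendar attachment from the same place once isFullyApproved() returns true.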

  • Ubuntu issues when moving hard disk to new system

    - by Tim
    I'm working on a legacy project with a small single board computer running Ubuntu 10.04 on a compact flash card.  I need to be able to save away a working image (via dd) and copy said image to other compact flash cards for use in other single board computers (with identical hardware).  I'm able to copy the image to other flash cards and boot up on other systems no problem, but I'm seeing strange behavior.  For instance, I can't use sudo on the new system (“sudo: must be setuid root”).  I've gone down the path of trying to fix this, but have run into a slew of other issues.  My general question is: what do I need to be aware of when moving a hard disk containing Ubuntu (in my case a compact flash card) to another computer?  I was hoping it would be seamless to Ubuntu since it's moving to a system with identical hardware.  Is there something that needs to be done to make it "portable"?

  • Web Application: Combining View Layer Between PHP and Javascript-AJAX

    - by wlz
    I'm developing a web application using PHP with the CodeIgniter MVC framework, and it has a huge amount of real-time client-side functionality.  This is my first time building a large-scale client-side app, so I'm combining PHP with a large set of JavaScript modules in one project.  As you already know, an MVC framework separates application modules into Model-View-Controller.  My concern is about the View layer.  I could display the data in the DOM via PHP's built-in script tags by loading the data in the Controller, or I could use AJAX to pull the data – treating the Controller as a service only – and display it with JavaScript.

    Here is how it looks if I put the data in directly from the Controller:

    <label>Username</label>
    <input type="text" id="username" value="<?=$userData['username'];?>"><br />
    <label>Date of birth</label>
    <input type="text" id="dob" value="<?=$userData['dob'];?>"><br />
    <label>Address</label>
    <input type="text" id="address" value="<?=$userData['address'];?>">

    Or if I pull it using AJAX:

    $.ajax({
        type: "POST",
        url: config.indexURL + "user",
        dataType: "json",
        success: function(data) {
            $('#username').val(data.username);
            $('#dob').val(data.dob);
            $('#address').val(data.address);
        }
    });

    So, which approach is better given that my application has complex client-side functionality?  On the other hand, PHP/CI has a default mechanism to pass data directly from the Controller, so why use AJAX at all?
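    For what it's worth, a rough sketch of how a single CodeIgniter controller could serve both styles – rendering the PHP view for normal requests and returning JSON for AJAX calls.  The model and method names are assumptions made up for this illustration:

        <?php
        class User extends CI_Controller {

            public function profile($id)
            {
                // User_model::get_profile() is assumed to exist for this sketch.
                $this->load->model('User_model');
                $userData = $this->User_model->get_profile($id);

                if ($this->input->is_ajax_request()) {
                    // AJAX caller: act as a service and return JSON.
                    $this->output
                         ->set_content_type('application/json')
                         ->set_output(json_encode($userData));
                } else {
                    // Normal request: let PHP render the view server-side.
                    $this->load->view('user_profile', array('userData' => $userData));
                }
            }
        }

    With something like this the choice isn't strictly either/or: server-rendered pages and AJAX-heavy pages can share the same controllers and models, and only the final step (view vs. JSON) differs.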
