Search Results

Search found 625 results on 25 pages for 'deals'.


  • How would I put together a site requiring several TB? [closed]

    - by acidzombie24
    Let's say I have a site with unmetered 100 Mbps bandwidth (I assume that's bits?) and the RAM I require. Most plans I see offer drives that hold 250 GB to 1 TB. But what happens if I compile/generate enough data that I require 10 TB or 25 TB? (I'd likely have two servers, but...) I wouldn't be serving all of that data (well, not to the public), so a CDN wouldn't make sense. What do I do in this scenario? Do I need to get a custom plan from a hosting provider? (If so, how do I find them?) Are there services that allow me to mount remote drives (that sounds wrong unless it's a CDN, so maybe not)? Are there hosts that deal specifically in unmetered bandwidth and provide lots of disk space? Math says ~1 TB is the most I'll ever need, but if I happen to need more I'd like to know my options.


  • How to get multiple open-source projects to use a standard way of doing something.

    - by Marco
    Problem: In the last couple of weeks, I've used three different "repository" tools (listed in alphabetical order): Gradle, Ivy, and Maven. I'm calling them "repository" tools because I've also used sbt -- which fortunately uses Ivy to manage its cache, or local repository. Each of these tools creates its own repository. The defaults are ~/.m2/repository for Maven, ~/.gradle/cache for Gradle, and ~/.ivy2/cache for Ivy. Why can't they all use the same cache? Goal: I'd like to change the world so that all three build tools could use the same cache. I'm looking for advice about issues I'm likely to run into and smart ways to get around them. By "use the same cache", I do not mean "retrieve from another build tool's cache". I mean "retrieve from and store in another build tool's cache". While I could go ahead and submit issues to the three projects, I know from experience (as a developer on an open-source project) that if you want something done, you're best off doing it yourself. Also, it seems like I need to get all three communities on board to some degree. What is the recommended approach for getting this kind of thing done? How do I approach the different communities? Do I work on patches for the three different projects, or would I be better off creating my own "interface" project that deals with these issues and having the three tools interface with that? Is this a standards question that I need to address on that front? Lastly, if I'm missing something and this is already possible (in a globally configurable fashion), then please let me know.


  • Great Example of a Simple Cost-Benefit Analysis

    - by BuckWoody
    I saw a post the other day that you should definitely go check out. It's a cost/benefit decision, and although the author gives it a quick treatment and doesn't take every point in the decision into account, you should focus on the process he follows. It's a quick and simple example of the kind of thought process we should have as data professionals when we pick a server, a process, an application, or even platform software. The key is to include more than just the price of a piece of software or hardware. You need to think about the "other" costs in the decision, and then make the right one. Sometimes the cheapest option really is the cheapest, and other times, well, it isn't. I've seen this played out not only in the decision to go with a certain selection, but in the options or editions it comes in. You have to put all of the decision points into the analysis to come up with the right answer, and you have to be able to explain your logic to your team and your company. This is the way you become a data professional, not just a DBA. You can check out the post here - it deals with Azure, but the point is the process, not Azure itself: http://blogs.msdn.com/eugeniop/archive/2010/03/19/windows-azure-guidance-a-simplistic-economic-analysis-of-a-expense-migration.aspx


  • 50% off ASP.NET hosting

    - by Fabrice Marguerie
    I haven't blogged for a long time because I'm busy working on an exciting new project. It's too early to tell you more; I'll provide details in a few months. Meanwhile, I wanted to write a quick post to share an excellent offer with you. It's that time of the year when you can get deals on many things, including web hosting. I'd like to remind you about Arvixe, a great web hosting provider for Windows (for ASP.NET) and Linux. For 48 hours, between Thursday November 24th at 00:00 PST (08:00 GMT/UTC) and Friday November 25th, Arvixe will be offering 50% off all of their shared hosting products. This applies to all new accounts, for life (as long as you continue to renew the account)! I've been using Arvixe for my websites for more than a year and a half now, and I highly recommend them. Here is an overview of what I get for a very good price:
    - Unlimited disk space
    - Unlimited data transfer
    - Unlimited domains
    - Unlimited POP3 and IMAP mailboxes
    - Unlimited SQL Server 2008 databases
    - Unlimited MySQL databases
    - .NET 1.1, 2, 3.5 and 4
    - Dedicated application pools
    - Full trust
    - IIS 7
    - Daily backups
    - and more...
    And now, you can get all that for half the price. Just go to Arvixe.com and secure your own hosting account now by using the coupon code "Black Friday" during checkout. Disclaimer: the links to Arvixe are affiliate links that may bring me some money if you sign up. Still, I recommend Arvixe because I use them and I'm very happy with what they offer.


  • What music player for Ubuntu is *most* like Cog?

    - by mattshepherd
    Cog is, far and away, my favourite music player: file browser in the left panel, pull a folder into the right panel, and it plays the files you drop into the main box. Simple! Easy! Nice! Especially since I store all my music on my non-primary drive. Cog doesn't care where I keep my music! It just deals with whatever folder I tell it to monitor. I also have about 200 gigs of music. It's a problem, I know. I can't find a player this simple and elegant in Ubuntu.
    - Amarok: doesn't handle non-primary drives very well.
    - Banshee: kinda good, but fussy browsing/playlist systems.
    - Audacious: no file browser; it also crashes whenever I try to "import" all my music.
    All I want -- seriously -- is something where I can look at my MP3 folder, drag a folder onto a playlist, and have that playlist play the songs.


  • ACT On Marketing Campaign “Middleware Consolidation and Innovation Program”

    - by JuergenKress
    Do you want marketing budget to run joint Oracle Fusion Middleware 12c events? Participate in the OFM ACTon Campaign. The opportunity for you as a partner is to:
    - Create larger deals by reselling software and systems, e.g. WebLogic on ODA, SOA on ODA, Exalogic for AppAdvantage
    - Create more service revenue at your existing customers, through consolidation and migration of application server platforms
    - Extend and innovate platforms, e.g. mobile integration, big data, or business process automation
    - Create service business at new customers - more than 120,000 customers use middleware today!
    The objective of the initiative is to run joint events for our middleware customers and:
    - Generate resell middleware license revenue in the broad market
    - Generate service revenue for partners
    - Prepare partners to understand upgrade and upsell opportunities to Oracle Fusion Middleware 12c
    In the WebLogic Community Workspace (WebLogic Community membership required) you can learn the details of the campaign: OFM ACTon event Brief & Middleware Consolidation and Innovation Act-On Program Salesplay & Campaign kit DRAFT. Interested and want to participate? Contact your local Value Added Distributor, and they will work with you on a joint campaign plan! For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.


  • South Florida Stony Brook Alumni & Friends Reception 2011

    - by Sam Abraham
    It's official: we are kicking off a local South Florida chapter for Stony Brook alumni and friends in the area to keep in touch. Our first networking event will take place at Champps, Ft. Lauderdale on November 17th, 6:00-8:00 PM. Admission is free and open to everyone, whether or not they are Stony Brook alums. The team at Champps is offering us great specials (happy hour deals, half-price appetizers, etc.) that we can choose to enjoy while we network and catch up. (Event announcement: http://alumniandfriends.stonybrook.edu/page.aspx?pid=299&cid=1&ceid=171&cerid=0&cdt=11%2f17%2f2011) I look forward to sharing and reviving my college experience, which I believe was the starting line of my ongoing life journey. It would also be great to hear others' takes as they reflect on their experiences throughout their college years. I invite anyone interested in keeping in touch with friends and alums of Stony Brook to join our LinkedIn or Facebook groups. The Stony Brook Alumni Association - South Florida Chapter LinkedIn group: http://www.linkedin.com/groups?gid=3665306&trk=myg_ugrp_ovr The Stony Brook Alumni Association - South Florida Chapter Facebook group: http://www.facebook.com/#!/groups/114760941910314/


  • Personalized Pricing

    - by David Dorf
    In past postings I've spent a fair amount of time talking about targeted promotions. Using a complete view of the customer that includes purchase history, location history, and psychographics gleaned from social media, we can select the offer with the greatest chance of redemption. This is done to influence shopping behavior, which might mean introducing the consumer to a new product line, increasing their basket size, increasing frequency of purchases, etc. Safeway seems to be taking a slightly different approach with their personalized pricing. In addition to offering electronic coupons and club card offers, they are also providing a personalized price for certain items based on purchase history. So when Sally wants to shop at Safeway, she first checks the "Just for U" website for three types of deals. She starts by selecting manufacturer coupons to load into her loyalty card, then she checks the Club Card for offers like "buy one get one free." The third step is the interesting one: Safeway will set a particular lower price for Sally, good for 90 days, on items she buys often. Clearly this isn't enforcing a new behavior but rather instilling loyalty. I would love to know exactly how they are determining the personalized price. Of course bargain hunters can still stack the three offers so they can, for example, get their $4.99 oatmeal for $0.72. I like this particular question and answer from the website's FAQ: "My offers are not that great. Can I tell you what offers I need?" "That's a good idea. That functionality is not currently available, but we appreciate your input and are constantly improving our just for U program. Stay tuned for exciting enhancements!" I suppose if Safeway is tracking all the purchases, they can easily determine whether the customer is profitable. As long as the customer stays profitable, why not let them determine a few offers themselves? Food for thought.
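
    To see how three stacked offers could take $4.99 down to $0.72, here is a toy calculation (the individual discount amounts are my guesses; only the start and end prices come from the post):

        # Toy illustration of offer stacking; every discount amount below
        # is hypothetical except the $4.99 shelf price and $0.72 result.
        shelf_price = 4.99
        personalized_price = 2.99     # assumed "Just for U" price, good for 90 days
        club_card_discount = 1.50     # assumed Club Card offer
        manufacturer_coupon = 0.77    # assumed coupon loaded onto the loyalty card

        final_price = personalized_price - club_card_discount - manufacturer_coupon
        print(f"${final_price:.2f}")  # -> $0.72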


  • Information Spilling Across Object Boundaries

    - by Winston Ewert
    Many times my business objects tend to have situations where information needs to cross object boundaries too often. When doing OO, we want information to live in one object, and as much as possible all code dealing with that information should be in that object. However, business rules do not follow this principle, which gives me trouble. As an example, suppose that we have an Order which has a number of OrderItems, each of which refers to an InventoryItem, which has a price. I invoke Order.GetTotal(), which sums the results of OrderItem.GetPrice(), which multiplies a quantity by InventoryItem.GetPrice(). So far so good. But then we find out that some items are sold with a two-for-one deal. We can handle this by having OrderItem.GetPrice() do something like InventoryItem.GetPrice(quantity) and letting InventoryItem deal with it. However, then we find out that the two-for-one deal only lasts for a particular time period, and that period depends on the date of the order. Now we change OrderItem.GetPrice() to call InventoryItem.GetPrice(quantity, order.GetDate()). But then we need to support different prices depending on how long the customer has been in the system: InventoryItem.GetPrice(quantity, order.GetDate(), order.GetCustomer()). But then it turns out that the two-for-one deals apply not just to buying multiples of the same inventory item, but to multiples of any item in an InventoryCategory. At this point we throw up our hands and just give the InventoryItem the order item, allowing it to travel over the object reference graph via accessors to get the information it needs: InventoryItem.GetPrice(this). TL;DR: I want low coupling between objects, but business rules often force me to access information from all over the place in order to make particular decisions. Are there good techniques for dealing with this? Do others run into the same problem?
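
    One common way out of the ever-growing parameter list (a sketch of the general technique, not the poster's code; all names are mine) is to gather everything a pricing rule may consult into one context object, so InventoryItem can ask for what it needs without OrderItem relaying every new fact through its signature:

        from dataclasses import dataclass, field
        from datetime import date

        @dataclass
        class PricingContext:
            """Everything a pricing rule may consult, in one place."""
            quantity: int
            order_date: date
            customer_tenure_days: int
            category_quantities: dict = field(default_factory=dict)

        class InventoryItem:
            def __init__(self, base_price: float, category: str):
                self.base_price = base_price
                self.category = category

            def get_price(self, ctx: PricingContext) -> float:
                qty = ctx.quantity
                # Hypothetical two-for-one window; the rule reads the
                # order date and category totals from the context.
                if date(2010, 1, 1) <= ctx.order_date <= date(2010, 6, 30):
                    free = ctx.category_quantities.get(self.category, 0) // 2
                    qty = max(qty - free, 0)
                return self.base_price * qty

    New rules then extend PricingContext instead of every call site, which contains the signature churn the post describes - though the context itself can grow into a grab bag if left unchecked.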


  • Auto switching between wired and wireless connections

    - by Joe
    How about this situation: our business deals a lot with medical information, and some of our clients have demands based on HIPAA, etc. There is one client now who does not want an employee to have both wired and wireless on at the same time. If the wireless is on, the wired needs to be turned off automatically, and vice versa. However, this cannot be left to the end user to manage! I have looked for third-party applications and have only found http://www.wirelessautoswitch.com Does anyone know of anything else that is out there? Or possibly something that can be done via Group Policy, etc.?
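
    For what it's worth, here is a rough sketch of the polling approach a scheduled task or service could take (not a managed Group Policy solution; the adapter names "Ethernet" and "Wi-Fi" are assumptions - check `netsh interface show interface` on the target machines):

        # Rough sketch: disable Wi-Fi whenever the wired NIC reports
        # "Connected", re-enable it otherwise. Run elevated; assumes
        # English-locale netsh output.
        import subprocess, time

        def connected(name: str) -> bool:
            out = subprocess.run(
                ["netsh", "interface", "show", "interface", f"name={name}"],
                capture_output=True, text=True).stdout
            return "Connected" in out

        def set_enabled(name: str, enabled: bool) -> None:
            subprocess.run(["netsh", "interface", "set", "interface",
                            f"name={name}",
                            "admin=enabled" if enabled else "admin=disabled"])

        while True:
            set_enabled("Wi-Fi", not connected("Ethernet"))
            time.sleep(10)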


  • My KDE very slow in certain operations

    - by Pietro
    I have a problem with my Linux installation. The KDE code that deals with directory windows seems extremely slow (in both Dolphin and Konqueror). This happens both when I click on a directory icon and when I want to open/save a file from many KDE applications. The window can take a minute or more to open. The same happens when I right-click on an icon. CPU usage stays very low (less than 10%). Am I the only one with this problem, or is it well known and maybe already fixed? Consider that I cannot update to a more recent version of openSUSE. Thank you, Pietro
    Configuration:
    Linux version: openSUSE 11.4
    KDE 4.6.0
    System: DELL Precision T3500 - Intel Xeon
    Home directory mounted on a remote drive <-- could this be the reason?


  • Proper updating of GeoClipMaps

    - by thr
    I have been working on an implementation of GPU-based geometry clipmaps, but there is a section of the GPU Gems 2 article that I just can't seem to understand, specifically this paragraph and more precisely the bolded part: "The choice of grid size n = 2^k - 1 has the further advantage that the finer level is never exactly centered with respect to its parent next-coarser level. In other words, it is always offset by 1 grid unit either left or right, as well as either top or bottom (see Figure 2-4), depending on the position of the viewpoint. In fact, it is necessary to allow a finer level to shift while its next-coarser level stays fixed, and therefore the finer level must sometimes be off-center with respect to the next-coarser level. An alternative choice of grid size, such as n = 2^k - 3, would provide the possibility for exact centering." Let's take an example image from the article. My "understanding" of the way the clipmaps were updated was that you floor the position of the viewpoint to an int and so get the center vertex point; if this is not the same as the previous center point, you update the entire map. Now, this obviously is not the case - but what I am failing to understand is this: if you look at the image above, if the viewpoint were to move one unit to the right, then the inner ring (the one just around the viewpoint + white center square) would end up getting a 1-unit space on both its left and right side. But there is nothing in the paper that deals with this; what I mean is that it would end up looking like this (excuse my crummy cut-and-paste editing of the above image). This is obviously not a valid state for the clipmap. So, would the solution be that a clip ring (layer) can only move in increments of the ring/layer it's contained within? Wouldn't this end up being very restrictive? I feel like I am missing some crucial understanding of parts of the algorithm, but I have been over both this paper and the original paper from 2004 and I just can't see what I am not getting.
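
    For what it's worth, one common snapping scheme (a sketch of my reading of the algorithm, not code from the paper) constrains each level to move only in steps of twice its own grid spacing - that is, one grid unit of the next-coarser level - which is exactly the "increments of the containing ring" behaviour suspected above, and it is less restrictive than it sounds because finer levels take proportionally smaller steps:

        import math

        def level_origin(view_x: float, level: int) -> float:
            """Snap a level (grid spacing 2**level) so its vertices stay
            aligned with the next-coarser level. Moving only in steps of
            2 * spacing means a level shifts one parent unit at a time;
            with n = 2**k - 1 vertices the finer ring then sits offset by
            one of its units left or right of center, never half a unit."""
            step = 2 * (2 ** level)
            return step * math.floor(view_x / step)

        # As the viewpoint crosses a step boundary, only the levels whose
        # step it crossed move; coarser rings stay fixed, so the finer
        # ring legitimately shifts from one side of its parent to the other.
        for x in (10.0, 11.0, 12.0, 13.0, 14.0):
            print(x, [level_origin(x, l) for l in range(3)])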


  • Reboot loop after Windows XP Service Pack 3 update

    - by espais
    Recently I upgraded to Service Pack 3, and now it seems that something has gone terribly wrong with the update. After logging in, my computer will blue-screen after about 5 minutes and then go into a reboot loop (I don't have the exact error message handy). I have a Sager NP2092 notebook, running an Intel chipset. I'd rather avoid having to reformat my XP install, especially with my copy of Windows 7 arriving right around the corner. After doing some Googling, I came across this article: "Does your AMD-based computer boot after installing XP SP3?" However, it deals with AMD chips, and specifically states not to use its fix on Intel-based systems.
    EDIT: After killing the reboot, this is the error that pops up: STOP: 0x000000F4 (0x00000003, 0x8A187118, 0x8A18728C, 0x80604438)
    EDIT 2: I have run Memtest86, and it reported 0 errors.


  • For my first professional position, should I take a job at a start-up or a better-known company? [closed]

    - by Carl Carlson
    I am a couple of months removed from graduating with a CS degree, and my GPA wasn't very high, but I do have aspirations of becoming a good software developer. I got two job offers recently: one with a small start-up and the other with a military contractor. The military contractor asked for my GPA and I gave it to them. The military contracting position involves developing GIS-related applications, which I became familiar with in an internship. After receiving the offer from the military contractor, I received an offer from the start-up - after the start-up asked me how much the military contractor's offer was. So the pay is even. The start-up would thrust me into the work immediately, with only two other people in the company, and I would have to learn everything on my own. The military contractor has teams and people who know what they're doing and would be able to offer me guidance. Seeing as I am a couple of months removed from school and need something of a refresher, is it better to dive into the start-up and diversify what I've learned, or to specialize on a particular track? Some more facts about the start-up: it deals with military contracts as well and is in Phase 2 of its contracts. It would require me to learn a diverse set of technologies, including cyber security, Android development, Python, JavaScript, etc. The military contractor would have me learn more C#, refine my Java, do JavaScript, and work with GIS-related technologies. I might as well come out and say the military contractor is Northrop Grumman, and they more or less offered me less money than the projected starting salary from online salary calculators. But there is the possibility of bonuses, while the start-up doesn't offer any. I think benefits for both are roughly the same.


  • How should I structure my database to gain maximum efficiency in this scenario?

    - by Bob Jansen
    I'm developing a PHP script that analyzes the web traffic of my clients' websites. By placing a link to a JavaScript file on the client's website (think of Google Analytics), my script harvests information like the visitor's IP address, referring link, current page link, user agent, etc. My clients can then view these statistics via a control panel that I have built. These clients can also adjust profile settings, set firewall rules, create support tickets, and pay invoices. Currently all the traffic is stored in one table. You can imagine that this table would become very large, as some of my clients receive thousands of pageviews per day. Furthermore, the traffic data of all clients is stored in the same table, creating a mess. The same goes for the firewall rules, and for the invoice and support systems. I'm looking for a way to structure my database in a more organized way, to hold large amounts of data for multiple users. This is the first project I'm developing that deals with so much data, and I would like to hear suggestions and tips. I was thinking of using multiple databases to structure the data: the main database would store user data (email, password, id, etc.) and admin/website settings, and each client would have a unique database labeled prefix_userid, which carries tables holding their traffic, invoice, and support ticket data. Would this be a solution, and would it slow down or speed up overall performance (that is, spreading the data over multiple databases)? I have a solid VPS, but I would like to play it safe and be as efficient as possible.
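
    For contrast with the database-per-client idea, the more common multi-tenant layout keeps one schema and puts a client key on every row; a minimal sketch (table and column names are mine, SQLite used only for illustration):

        import sqlite3

        conn = sqlite3.connect("analytics.db")  # hypothetical database
        conn.executescript("""
        CREATE TABLE IF NOT EXISTS clients (
            id    INTEGER PRIMARY KEY,
            email TEXT NOT NULL UNIQUE
        );
        -- One shared traffic table; every row carries its owner's id.
        CREATE TABLE IF NOT EXISTS pageviews (
            id         INTEGER PRIMARY KEY,
            client_id  INTEGER NOT NULL REFERENCES clients(id),
            ip         TEXT,
            referrer   TEXT,
            page       TEXT,
            user_agent TEXT,
            viewed_at  TEXT DEFAULT CURRENT_TIMESTAMP
        );
        -- The composite index keeps per-client, time-ranged queries fast
        -- even when the table holds every client's traffic.
        CREATE INDEX IF NOT EXISTS idx_pageviews_client_time
            ON pageviews (client_id, viewed_at);
        """)

    Per-client queries then always filter on client_id and stay index-backed; if the table truly outgrows this, partitioning or archiving by date is usually the next step before splitting into separate databases.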


  • What are the hard and fast rules for Cache Control?

    - by Metalshark
    Confession: the sites I maintain have different rules for Cache-Control, mostly based on the default configuration of the server, followed up with recommendations from the Page Speed and YSlow Firefox plug-ins and the Network Resources view in Google's Speed Tracer. Cache-Control is set to private or public depending on what they say to do, ETag/Last-Modified headers are only tinkered with if YSlow suggests something is wrong, and Vary: Accept-Encoding seems necessary when manually gzipping files for Amazon CloudFront. When reading through the material on the different options and what they do, there seems to be conflicting information, rules for broken proxies, and cargo-cult configurations. The official information provided by the analysis tools mentioned above is quite inaccessible, as it deals with each topic individually instead of as a unified strategy (so there is no cross-referencing of techniques). For example, it seems to make no sense that the speed analysis tools rate a site with ETags the same as a site without them if they are meant to help with caching. What are the hard and fast rules for a platform-agnostic Cache-Control strategy?
    EDIT: A link to Jeff Atwood's article explains caching in superb depth. For the record, though, here are the hard and fast rules:
    - If the file is compressed using gzip, etc., use "Cache-Control: private", as a proxy may return the compressed version to a client that does not support it (the browser cache will still hold files marked this way). Also remember to include "Vary: Accept-Encoding" to say that it is compressible.
    - Use Last-Modified in conjunction with ETag: belt-and-braces usage provides both validators, and since ETag is based on file contents instead of modification time alone, using both covers all bases. NOTE: AOL's PageTest has a carte blanche approach against ETags for some reason.
    - If you are using Apache on more than one server to host the same content, remove the implicitly declared inode from ETags by excluding it from the FileETag directive (i.e. "FileETag MTime Size"), unless you are genuinely using the same live filesystem.
    - Use "Cache-Control: public" wherever you can; it means that proxy servers (and the browser cache) will return your content even if the rest of the page needs HTTP authentication, etc.
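
    Collected as code, the rules above might look like this helper (a sketch, not any framework's API; the function and its defaults are mine):

        import email.utils, hashlib

        def cache_headers(body: bytes, gzipped: bool, mtime: float) -> dict:
            """Response headers following the rules listed above."""
            headers = {
                # private keeps gzipped responses out of proxies that might
                # hand them to clients without gzip support; public otherwise.
                "Cache-Control": "private" if gzipped else "public",
                # Belt and braces: both validators.
                "Last-Modified": email.utils.formatdate(mtime, usegmt=True),
                "ETag": '"%s"' % hashlib.md5(body).hexdigest(),
            }
            if gzipped:
                headers["Vary"] = "Accept-Encoding"
            return headers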


  • Very large log files, what should I do?

    - by Masroor
    (This question deals with a similar issue, but it talks about a rotated log file.) Today I got a system message about very low /var space. As usual, I executed commands along the lines of sudo apt-get clean, which improved the situation only slightly. Then I deleted the rotated log files, which again provided very little improvement. Upon examination, I found that some log files in /var/log have grown very large. To be specific, ls -lSh /var/log gives:
        total 28G
        -rw-r----- 1 syslog adm   14G Aug 23 21:56 kern.log
        -rw-r----- 1 syslog adm   14G Aug 23 21:56 syslog
        -rw-rw-r-- 1 root   utmp 390K Aug 23 21:47 wtmp
        -rw-r--r-- 1 root   root 287K Aug 23 21:42 dpkg.log
        -rw-rw-r-- 1 root   utmp 287K Aug 23 20:43 lastlog
    As we can see, the first two are the offending ones. I am mildly surprised that such large files have not been rotated. So, what should I do? Simply delete these files and then reboot? Or go for some more prudent steps? I am using Ubuntu 14.04.
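
    As a hedged aside (my suggestion, not part of the question): the usual advice is to truncate rather than delete, so the daemons writing to these files keep a valid file handle, along the lines of:

        # Truncate in place (run as root); paths taken from the listing above.
        # Afterwards, check /etc/logrotate.d and the log contents to find
        # out why rotation stopped and what is flooding the logs.
        for path in ("/var/log/syslog", "/var/log/kern.log"):
            with open(path, "w"):   # opening with mode "w" truncates the file
                pass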


  • JavaScript 3D space ship rotation

    - by user36202
    I am working with a fairly low-level JavaScript 3D API (not Three.js) which uses Euler angles for rotation. In most cases, Euler angles work quite well for doing things like aligning buildings, operating a hovercraft, or looking around in first person. However, in space there is no up or down. I want to control the ship's roll, pitch, and yaw. To do that, some people would use a local coordinate system or a permanent matrix or quaternion or whatever to represent the ship's orientation. However, since I am not most people and am using a library that deals exclusively in Euler angles, I will be using relative angles to represent how to rotate the ship in space and computing the resulting non-relative Euler angles. For you math nerds, that means I need to convert 3 Euler angles (with Y being the vertical axis, X representing the pitch, and Z representing a roll which is unaffected by the other angles, left-handed system) into a 3x3 orientation matrix, do something fancy with the matrix, and convert it back into the 3 Euler angles. Euler to matrix to Euler. Somebody has posted something similar on SO (http://stackoverflow.com/questions/1217775/rotating-a-spaceship-model-for-a-space-simulator-game), but he ended up just working with a matrix. That will not do for me. Also, I am using JavaScript, not C++. What I want, essentially, are the functions do_roll, do_pitch, and do_yaw which only take in and put out Euler angles. Many thanks.
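
    For reference, the round trip in the stated convention looks like this (written here for a right-handed system; in a left-handed system some signs flip, so treat these as a template to verify rather than gospel). With yaw alpha about Y, pitch beta about X, and roll gamma about Z:

        R = R_y(\alpha)\,R_x(\beta)\,R_z(\gamma) =
        \begin{pmatrix}
          \cos\alpha\cos\gamma + \sin\alpha\sin\beta\sin\gamma &
          \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma &
          \sin\alpha\cos\beta \\
          \cos\beta\sin\gamma & \cos\beta\cos\gamma & -\sin\beta \\
          \cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma &
          \sin\alpha\sin\gamma + \cos\alpha\sin\beta\cos\gamma &
          \cos\alpha\cos\beta
        \end{pmatrix}

        \beta = \arcsin(-R_{23}), \qquad
        \alpha = \operatorname{atan2}(R_{13}, R_{33}), \qquad
        \gamma = \operatorname{atan2}(R_{21}, R_{22})

    When cos(beta) is near 0 (pitch straight up or down) the matrix hits gimbal lock and yaw/roll are no longer separable; that is the case any Euler-only API forces you to special-case.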


  • Are long methods always bad?

    - by wobbily_col
    Looking around earlier, I noticed some comments about long methods being bad practice. I am not sure I always agree that long methods are bad (and would like opinions from others). For example, I have some Django views that do a bit of processing on the objects before sending them to the view, a long method being 350 lines of code. I have my code written so that it deals with the parameters - sorting/filtering the queryset - then bit by bit does some processing on the objects my query has returned. The processing is mainly conditional aggregation, with rules complex enough that it can't easily be done in the database, so I have some variables declared outside the main loop which get altered during the loop:
        variable_1 = 0
        variable_2 = 0
        for object in queryset:
            if object.condition_a and variable_2 > 0:
                variable_1 += 1
            # ... more conditions that alter the variables
        return queryset, context
    So according to the theory, I should factor all the code out into smaller methods so that the view method is at most one page long. However, having worked on various code bases in the past, I sometimes find that this makes the code less readable: you need to constantly jump from one method to the next to figure out all the parts, while keeping the outermost method in your head. With a long method that is well formatted, you can see the logic more easily, because it isn't hidden away in inner methods. I could factor the code out into smaller methods, but often there is an inner loop being used for two or three things, so factoring would result in more complex code, or in methods that don't do one thing but two or three (alternatively, I could repeat the inner loops for each task, but then there would be a performance hit). So is there a case that long methods are not always bad? Is there always a case for writing methods when they will only be used in one place?
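
    To make the trade-off concrete, here is one way the loop above could factor out (a sketch; the names are placeholders from the snippet, not real code). It shows both the gain - a named, separately testable step - and the cost the post worries about: the shared state now has to thread through the helper's signature:

        def aggregate(queryset):
            """The conditional aggregation pulled out of the view."""
            variable_1 = 0
            variable_2 = 0
            for obj in queryset:
                if obj.condition_a and variable_2 > 0:
                    variable_1 += 1
                # ... more conditions that alter both variables
            return variable_1, variable_2

        def view_processing(queryset):
            variable_1, variable_2 = aggregate(queryset)
            context = {"variable_1": variable_1, "variable_2": variable_2}
            return queryset, context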


  • Organizing files relationally in Windows 7?

    - by Cayetano Gonçalves
    I just took a new job as a policy analyst, and after even one week, keeping track of hundreds of files - lawsuits, legislation, letters, etc. - in Windows 7 is proving difficult. In my last job I was a database architect, and I helped build Linux-based servers to track files across an entire department; however, there is no way for me to do that at this time in this job. Is there any way to track files/indices/locations/tags/themes and store them in some kind of RDBMS, instead of storing the files in folders that only allow flat, fixed storage? For example, I might have a file that deals with: ELID, organization, Appeals court, John Smith. It really is inconvenient to have to decide which one of these tags to turn into a folder and place the file in it, when the file falls under all of the categories. Even if I could put tags on files the way you can on Stack Exchange, it would solve a lot of heartache.
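
    Since this is essentially a many-to-many tagging problem, a minimal relational sketch of it looks like the following (my table names; SQLite used only for illustration):

        import sqlite3

        conn = sqlite3.connect("file_catalog.db")  # hypothetical catalog
        conn.executescript("""
        CREATE TABLE IF NOT EXISTS files (
            id   INTEGER PRIMARY KEY,
            path TEXT NOT NULL UNIQUE   -- where the file actually lives
        );
        CREATE TABLE IF NOT EXISTS tags (
            id   INTEGER PRIMARY KEY,
            name TEXT NOT NULL UNIQUE   -- 'ELID', 'Appeals court', 'John Smith', ...
        );
        -- The join table lets one file carry every tag that applies,
        -- so no single tag has to be promoted to a folder.
        CREATE TABLE IF NOT EXISTS file_tags (
            file_id INTEGER REFERENCES files(id),
            tag_id  INTEGER REFERENCES tags(id),
            PRIMARY KEY (file_id, tag_id)
        );
        """)

    Finding everything tagged "John Smith" is then a single join, regardless of which folder each file physically sits in.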


  • Accessing Secure Web Services from ADF Mobile

    - by Shay Shmeltzer
    Most of the enterprise Web services you'll access are going to be secured - meaning they'll require you to pass a user/password in order to get to their data. If you've never created a secured Web service, it's simple in JDeveloper! For the video below, I just right-clicked on a Java class that I exposed as a Web service, chose "Web Service Properties", and then checked the "oracle/wss_username_token_service_policy" box from the list of options (that's the option supported by ADF Mobile right now). In the demo below we are going to use a "remote" login server that does the authentication of the user/pass. The easiest way to "create" a remote login server is to create a "regular" web ADF application, secure it, and deploy it on a server. The secured ADF application can just require ADF Authentication with simple HTTP Basic Authentication - basically the next two steps in the Application->Secure->Configure ADF Security menu wizard. OK - so now you have a secured ADF application. Deploy it on a server and get the URL for that application. From this point on, the process deals with the configuration of your ADF Mobile app. First you'll need to enable security for your ADF Mobile application, so it will prompt users to provide a user/pass combination. You'll also need to configure security on specific features, and you can have them use remote login pointing to your regular secured ADF application. Next, define your Web service data control. Right-click on the Web service data control to "define Web Service Security". You'll also need to define the adfCredentialStoreKey property for the Web Service data control in the connections.xml file. This should be it. You can read more about this in the Mobile developer guide, and Andrejus has a sample for you.


  • How do I get away from PHP in the web industry?

    - by Kurtis
    I'm just looking for some tips on getting away from using PHP for web development. I'm self-employed, but it seems like all of the work I find deals with PHP. I'm not complaining about the work - just the poor choice of an incredibly popular language. I'd love to do my web development in Python, Perl, C#, or even a fun and fancy functional language. There's the old saying that you don't tell a carpenter what kind of hammer to use. At the same time, you do tell them what kind of material to build your house out of and how much you're willing to spend. The problem I am running into is that I don't know how to get out of this spiral. I can't just turn down work, because then I wouldn't have any. I really don't want to go work for another company - and even if I did, I'd probably still be stuck using something I don't enjoy. I'm hoping someone has "been there" before and might have some good ideas on how to get out of this situation. Thanks!


  • How is the impact of a requirement change on existing code determined?

    - by MainMa
    How do companies working on large projects evaluate the impact of a single modification on existing code? Since my question is probably not very clear, here's an example. Let's take a sample business application which deals with tasks. In the database, each task has a state, 0 being "Pending", ... 5 being "Finished". A new requirement adds a new state between the 2nd and 3rd ones. That means:
    - A constraint on the values 1-5 in the database must be changed,
    - The business layer and code contracts must be changed to add the new state,
    - The data access layer must be changed to take into account that, for example, the state StateReady is now 6 instead of 5, etc.,
    - The application must implement the new state visually: new controls, new localized strings for tool-tips, etc.
    When an application was written recently by one developer, it's more or less easy to predict every change needed. On the other hand, when an application has been written over years by many people, no single person can anticipate every change immediately, without any investigation. Since this situation (such changes in requirements) is very frequent, I imagine there are already some clever techniques and ways to predict the impact. Are there any? Do you know any books that deal with this subject? Note: my question is not related to the "How do you deal with changing requirements?" question. I'm not interested in evaluating the cost of a change, but rather in ways to predict the parts of an application that will be affected by the change. What those changes are and how difficult they are doesn't really matter for my question.
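
    As a toy illustration of the data-access-layer item above (my example, not the poster's code): code that names states symbolically concentrates the renumbering in one place, while code comparing against magic numbers breaks at every call site - which is exactly the kind of ripple an impact analysis has to hunt down:

        from enum import IntEnum

        class TaskState(IntEnum):
            PENDING = 0
            # ... intermediate states ...
            ON_HOLD = 3   # hypothetical new state squeezed in
            READY = 6     # renumbered once, in exactly one place

        def is_ready(state_value: int) -> bool:
            # Call sites compare against the name, not the number, so the
            # renumbering above does not touch them.
            return TaskState(state_value) is TaskState.READY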


  • How does datomic handle "corrections"?

    - by blueberryfields
    tl;dr:
    - Rich Hickey describes Datomic as a system which implicitly deals with the timestamps associated with data storage.
    - In my experience, data is often imperfectly stored in systems, and on many occasions needs to be retroactively corrected (i.e., often the question "was 'a' true on Tuesday at 12:00pm?" will have an incorrect answer stored in the database).
    - This seems like a spot where the abstractions behind Datomic might break - do they? If they don't, how does the system handle such corrections?
    Rich Hickey, in several of his talks, justifies the creation of Datomic and explains its benefits. His work, if I understand correctly, is motivated by the core insight that humans, when speaking about data and facts, implicitly associate some related context into their work (a date-time). By pushing the work required to manage the implicit date-time component of context into the database, he has created a system which is both much easier to understand and much easier to program. This turns out to be relevant to most database programmers in practice - his work saves everyone a lot of time managing complex, hard-to-produce/debug/fix time queries. However, especially in large databases, data is often damaged or incorrect (maybe it was not input correctly, maybe it eroded over time, etc.). While most database updates are insertions of new facts, and should indeed be treated that way, a non-trivial subset of the work required to manage time queries has to do with retroactive updates. I have yet to see any documentation which explains how such corrections, or retroactive updates, are handled by Datomic; from my experience, they are a non-trivial (and incredibly difficult to deal with) subset of the time-related data manipulation that database programmers face. Does Datomic gracefully handle such updates? If so, how?

