Search Results

Search found 1969 results on 79 pages for 'tom nail'.


  • Would Apple be able to tell if I'm running Mac OS inside a VM?

    - by Thomas Havlik
    Just as the question/title says. I understand that running Mac OS inside a VM is against the EULA for the consumer version (but not the server version, which is much more expensive!). If I were to purchase a legal copy of Mac OS, install it in a VM, and then register as an Apple Developer, would they shut me out? Is there a way they can tell the difference between emulated hardware and Apple computers? I'm slightly unfamiliar with how all of Apple's software works. Windows goes through its "genuine" test whenever service packs are installed, but I don't know if Mac OS goes through the same trouble. Many thanks, -Tom

    Read the article

  • Oracle as a Service in the Cloud

    - by Jason Williamson
    This should really be a Tweet, but I guess I'm writing a bit more. In the theme of data migration and legacy modernization, I am seeing more and more of a push to consolidate data sources, especially from non-Oracle to Oracle, in an effort to save dollars. From a modernization perspective, this fits in quite well. We are able to migrate things like Teradata, Sybase and DB2, put all of that into an Oracle database, and then provide it as an OaaS (Oracle as a Service) cloud offering. This seems to be a growing trend, not unlike the cool RDS Amazon database cloud, which is built on Oracle as well. We again find ourselves back in the familiar theme of migration, however. The target technology sounds like a winner (COBOL to Java/SOA, DB2/Datacom/Adabas to Oracle), but the age-old problem of how to get there still persists. It is not trivial to migrate large amounts of pre-relational or "devolved" relational data. To do this, we again must fall back on a tight migration roadmap and leverage the growing set of tools and services that we have. I'm working on a couple of new sections and chapters for a book that Tom, Prakash and I are writing on database migration and consolidation. I'll share some snippets shortly.

    Read the article

  • SQL language drawbacks, The Third Manifesto

    - by David Portabella
    Some time ago I read about drawbacks of the SQL language (the base language specification, not vendor-specific extensions), and one of the drawbacks was that the language does not let you create a set of tuples that don't come from a table. For instance, SELECT firstName, lastName FROM people; creates a set of tuples coming from the table people. Now, if I don't have this table people and I want to return a constant, I'd need something like this to return a set of two tuples (without requiring a table): SELECT VALUES('james', 'dean'), ('tom', 'cruisse'); Why would I need that? For the same reasons that we can define constants (not only basic types, but objects and arrays as well) in any advanced programming language. Workarounds? Yes, I could create a temporary table, fill in the data, and SELECT from that table. That is a hack to overcome the drawbacks of the poor SQL language. I think I read about this somewhere in "The Third Manifesto", but I can't find the paragraph/example discussing this particular drawback anymore. Do you know a reference for it?
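
    For reference, two table-free workarounds, sketched under the assumption that the DBMS implements the relevant parts of the standard:

      -- SQL:1999 table value constructor; PostgreSQL and DB2, for example,
      -- accept it as a standalone query
      VALUES ('james', 'dean'), ('tom', 'cruisse');

      -- widely portable fallback (Oracle needs FROM DUAL appended to each SELECT)
      SELECT 'james' AS firstName, 'dean' AS lastName
      UNION ALL
      SELECT 'tom', 'cruisse';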

    Read the article

  • Getting Started with Embedded Java

    - by hamsun
    Breanne Cooley, February 20, 2014. Featuring Tom McGinn, Java embedded curriculum developer. The Internet of Things (IoT) is connecting devices at an accelerating pace; by most industry estimates the number of connected devices will grow enormously by 2020. Oracle Java Micro Edition (Java ME) Embedded is designed for small, resource-constrained embedded devices. The training course Java ME Embedded: Develop Applications for Embedded Devices shows how to build and deploy an application on an embedded device, for example reading inputs and driving an LED, and how to manage applications on the device through the Application Management System (AMS). We hope to see you in the course!

    Read the article

  • Big Data – Basics of Big Data Architecture – Day 4 of 21

    - by Pinal Dave
    In yesterday's blog post we understood how the Big Data evolution happened. Today we will understand the basics of Big Data architecture. Big Data Cycle Just like every other database-related application, a Big Data project has its own development cycle, though the three Vs (link) certainly play an important role in deciding the architecture of Big Data projects. Just like every other project, a Big Data project also goes through similar phases of capturing, transforming, integrating and analyzing data, and building actionable reporting on top of it. While the process looks almost the same, due to the nature of the data the architecture is often totally different. Here are a few of the questions which everyone should ask before going ahead with a Big Data architecture. Questions to Ask How big is your total database? What is your reporting requirement in terms of time: real time, semi real time, or at frequent intervals? How important is data availability, and what is the plan for disaster recovery? What are the plans for network and physical security of the data? What platform will be the driving force behind the data, and what are the different service level agreements for the infrastructure? These are just basic questions; based on your application and business needs you should come up with a custom list of questions to ask. As I mentioned earlier, these questions may look quite simple, but the answers will not be simple. When we are talking about a Big Data implementation, there are many other important aspects which we have to consider when we decide on the architecture. Building Blocks of Big Data Architecture It is absolutely impossible to discuss and nail down the most optimal architecture for any Big Data solution in a single blog post; however, we can discuss the basic building blocks of Big Data architecture. Here is the image which I have built to explain how the building blocks of the Big Data architecture work together. The image gives a good overview of how the various components in a Big Data architecture are associated with each other. In Big Data, many different data sources are part of the architecture, hence extraction, transformation and integration are among the most essential layers of the architecture. Most of the data is stored in relational as well as non-relational data marts and data warehousing solutions. As per the business need, various data are processed and converted into proper reports and visualizations for end users. Just like the software, the hardware is almost the most important part of the Big Data architecture. In a Big Data architecture the hardware infrastructure is extremely important, and failover instances as well as redundant physical infrastructure are usually implemented. NoSQL in Data Management NoSQL is a very famous buzzword and it really means Not Relational SQL or Not Only SQL. This is because in a Big Data architecture the data can be in any format. It can be unstructured, relational, or in any other format, and from any other data source. To bring all the data together, relational technology is not enough; hence new tools, architectures and algorithms were invented which take care of all kinds of data. These are collectively called NoSQL. Tomorrow Over the next four days we will answer the buzzword: Hadoop. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Simple task framework - building software from reusable pieces

    - by RuslanD
    I'm writing a web service with several APIs, and they will be sharing some of the implementation code. In order not to copy-paste, I would like to ideally implement each API call as a series of tasks, which are executed in a sequence determined by the business logic. One obvious question is whether that's the best strategy for code reuse, or whether I can look at it in a different way. But assuming I want to go with tasks, several issues arise: What's a good task interface to use? How do I pass data computed in one task to another task in the sequence that might need it? In the past, I've worked with task interfaces like: interface Task<T, U> { U execute(T input); } Then I also had sort of a "task context" object which had getters and setters for any kind of data my tasks needed to produce or consume, and it gets passed to all tasks. I'm aware that this suffers from a host of problems. So I wanted to figure out a better way to implement it this time around. My current idea is to have a TaskContext object which is a type-safe heterogeneous container (as described in Effective Java). Each task can ask for an item from this container (task input), or add an item to the container (task output). That way, tasks don't need to know about each other directly, and I don't have to write a class with dozens of methods for each data item. There are, however, several drawbacks: Each item in this TaskContext container should be a complex type that wraps around the actual item data. If task A uses a String for some purpose, and task B uses a String for something entirely different, then just storing a mapping between String.class and some object doesn't work for both tasks. The other reason is that I can't use that kind of container for generic collections directly, so they need to be wrapped in another object. This means that, based on how many tasks I define, I would need to also define a number of classes for the task items that may be consumed or produced, which may lead to code bloat and duplication. For instance, if a task takes some Long value as input and produces another Long value as output, I would have to have two classes that simply wrap around a Long, which IMO can spiral out of control pretty quickly as the codebase evolves. I briefly looked at workflow engine libraries, but they kind of seem like a heavy hammer for this particular nail. How would you go about writing a simple task framework with the following requirements: Tasks should be as self-contained as possible, so they can be composed in different ways to create different workflows. That being said, some tasks may perform expensive computations that are prerequisites for other tasks. We want to have a way of storing the results of intermediate computations done by tasks so that other tasks can use those results for free. The task framework should be light, i.e. growing the code doesn't involve introducing many new types just to plug into the framework.
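
    One way to defuse the String-versus-String collision described above is to key the container on distinct typed key objects rather than on Class objects. A minimal sketch (Key, TaskContext internals and the method names are invented for illustration, not from any framework):

      import java.util.HashMap;
      import java.util.Map;

      // Each Key is its own identity: two Key<String> instances are different
      // map entries, so task A's String can't collide with task B's.
      final class Key<T> {
          private final String label;          // for diagnostics only
          Key(String label) { this.label = label; }
          @Override public String toString() { return label; }
      }

      final class TaskContext {
          private final Map<Key<?>, Object> items = new HashMap<>();

          <T> void put(Key<T> key, T value) { items.put(key, value); }

          @SuppressWarnings("unchecked")
          <T> T get(Key<T> key) { return (T) items.get(key); }
      }

    Tasks then share public Key constants (for example Key<Long> ORDER_TOTAL = new Key<>("orderTotal")) instead of one wrapper class per data item, which addresses the code-bloat worry.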

    Read the article

  • Love and Hate Outlook autocomplete, Outlook 2010/Exchange 2010

    - by Kay Sellenrode
    I think almost every Exchange admin will concur with me that the Outlook autocomplete cache is one of those things you love but at the same time also hate. Users mostly love this function, except when it fails. Luckily, since Outlook 2010 things got a little better and we got rid of the dreaded nk2 files. Outlook 2010 now includes a folder named "Suggested Contacts": all users you send an email to that don't already have a contact object are saved in this suggested contacts folder. A lot of people thought this folder is also the source for the autocomplete cache, which would make it somewhat easy to manage; I wish the solution were that easy. Sadly, separate from the suggested contacts, Outlook still maintains a cache for the autocomplete function. Let us say you run into the following situation: John works for Company A and is a popular contact for almost everyone in your organization. Now John quits his job at Company A and moves to Company B. Luckily John keeps your company as a customer, but his email address changes from companyA.com to companyB.com. Since you don't want to do any business with Company A anymore, you want to make sure none of your users accidentally mail to his old address. Now this is where the real fun starts, because almost all of your 1000 users have mailed at least once with John, meaning nearly every user most probably has John listed in their autocomplete cache. I have run into situations like this multiple times with several customers, which is always a pain. And of course this blog post is the result of one of those issues once again. I knew that with the suggested contacts we could do more than previously, but I still never spent time on it before. But today I thought, let's nail this now and forever! OK, let's start off: things are different for every combination of Outlook and Exchange. I'll explain the procedure for Exchange 2010 SP1+ in combination with Outlook 2010. First we want to get rid of all contact objects that contain [email protected]. To do this we need to be assigned the RBAC role "Mailbox Import Export", which can be done through the Exchange Control Panel. In my test environment I assigned this role to the Organization admins, but in real life you might want to add it to a custom role. Open the Exchange Control Panel by logging in to the ECP URL, in my case https://ITFEX.itf.local/ECP, and make sure you have selected your organization as the management scope. Browse to Roles & Auditing, and open the properties for the Organization Management role group. Click on the Add button to add a new role to the Organization Management role group, select the Mailbox Import Export role, and click Add and OK to add it to the role group.
    Once you have assigned that role to your account, you can open the Exchange Management Shell and execute the following command: Get-Mailbox -ResultSize unlimited | Search-Mailbox -TargetMailbox "your.account" -TargetFolder searchanddelete -LogLevel Full -LogOnly -SearchQuery "kind:contact AND [email protected]" This command will create a list of all mailboxes and any contacts that were found with an email address containing [email protected]; this list is then posted in the mailbox you specified as your.account, in the folder searchanddelete. Now examine the report that was created and posted in the mailbox to see if it matches what you think it should match. When you're confident that the search includes all references and no false positives, you can execute almost the same command, but this time with a delete action instead of the log-only option: Get-Mailbox -ResultSize unlimited | Search-Mailbox -TargetMailbox "your.account" -TargetFolder searchanddelete -LogLevel Full -DeleteContent -SearchQuery "kind:contact AND [email protected]" Now most people would think this removes the contact object from the suggested contacts, resulting in a removal from the autocomplete list. Sad but not true; to clean up the autocomplete list, start Outlook with the command: "outlook /cleanautocompletecache" This results in an empty cache, which is luckily rebuilt based on the suggested contacts, which now no longer include the [email protected] contact.

    Read the article

  • Heading out to Dallas GiveCamp 2011

    - by dotgeek
    The day has finally arrived for twelve local charities here in the Dallas area, when they'll get some help from various local developers with their website initiatives at this year's Dallas GiveCamp. I'm really looking forward to helping out at this year's event and what I hope will be the start of many more GiveCamps to follow. Similar to Habitat for Humanity, where people gather to help build and improve homes for people in need, GiveCamp brings together programmers and equips them with the virtual tools they need to build and improve the charities' existing websites. Tonight is when things kick off for this weekend's events, and teams will start working on their various projects. The building then continues through the night and all the way through until Sunday afternoon. The end goal for the teams and charities is to have a completed, working website for each charity to begin using, and to turn over all the production code and digital assets to them. None of this would be possible without the great sponsors we have returning once again and their donations of various products to help these charities out with their projects, like Telerik's CMS product Sitefinity 4.0, paired with a year of hosting from Verio, to mention just a few. Just like the skilled builders who might train volunteers in the use of a nail gun when building a house, training is also available here on site for the developers and these local charities, giving them all the skills to manage and use these products, from site development through to actual production, which is key to the success of this weekend's event. Tonight's training sessions will kick off with a real treat from Giovanni Gallucci, as he speaks about social media for NPOs, and later Gabe Sumner from Telerik will begin a training session on Sitefinity for developers. These training sessions will continue throughout the weekend, with DotNetNuke and Mojo Portal sessions planned as well. If you're a developer and would like to help out in the future, then check in your area and with your local user groups to find out if you already have a GiveCamp near you. If you don't have one available, then consider starting up a local GiveCamp so you too can help Code it Forward. About GiveCamp GiveCamp is a weekend-long event where software developers, designers, and database administrators donate their time to create custom software for non-profit organizations. This custom software could be a new website for the nonprofit organization, a small data-collection application to keep track of members, or an application for the Red Cross that automatically emails a blood donor three months after they've donated blood to remind them that they are now eligible to donate again. The only limitation is that the project should be scoped so it can be completed in a weekend. During GiveCamp, developers are welcome to go home in the evenings or camp out all weekend long. There is usually food and drink provided at the event. There are sometimes even game systems set up for when you and your team need a little break! Overall, it's a great opportunity for people to work together, develop new friendships, and do something important for their community. At GiveCamp, there is an expectation of "What Happens at GiveCamp, Stays at GiveCamp".
Therefore, all source code must be turned over to the charities at the end of the weekend (developers cannot ask for payment) and the charities are responsible for maintaining the code moving forward (charities cannot expect the developers to maintain the codebase).

    Read the article

  • IE9 and the Mystery of the Broken Video Tag

    - by David Wesst
    I was very excited when Microsoft released the Internet Explorer 9 Release Candidate. As far as I was concerned, this was another nail in the coffin for IE6 and step in the right direction for us .NET web developers as our base camp was finally starting to support the latest and greatest future-web standards. Unfortunately, my celebration was short lived as I soon hit a snag while loading up an HTML5 site I was building in Visual Studio 2010. The Mystery After updating Internet Explorer, I ran my HTML5 site that had the oh-so-lovely HTML5 video tag showing a video. Even though this worked in IE9 Beta, it appeared that IE9 RC could not load the same file. I figured that it was the video codec. Maybe IE9 RC no longer supported the video codec I used to encode my video. Here's the code I used: <video width="854" height="480" id="myOtherVideo" autoplay="" controls=""> <source src="/DemoSite1/Media/big_buck_bunny.mp4"/> <div> <p>Your browser does not support HTML5 Video.</p> </div> </video> As you can see from the code, I had the "fail-safe" code inside the video tag. The idea there being that if the video tag, or the video files themselves, are not supported by the browser my video should fail gracefully. What was even more strange was the fact that it worked in all the other HTML5 browsers that supported video. The Investigation Whoa! DJ stop the music. How can any of that make sense? Would the IE team really take such huge strides forward only to forget to include a feature that was already in the beta? I don't think so. I did plenty of searching on the web and asking around on the web, but could not seem to find anyone else having the same problem. Eventually I came across this post talking about declaring the MIME type in the .htaccess file. That got me thinking: does my web server support the video MIME type? I was using VS2010, so how do I know what kind of MIME types are supported by default? Still, my page hosted in Cassini (the web development server in VS2010) works on the other browsers. Why wouldn't it work with IE9 RC? To answer that, it was time to open up the upgraded toolbox known as the Developer's Tools in IE9 and use the new Network Tab. The Conclusion If you take a closer look at the results displayed from the Network tab, you can see that IE9 RC has interpreted the video file as text/html rather than video/mp4. To make this work, I decided to use IIS to debug my HTML5 web application by setting the web project's properties. Then, I added the MIME types that I want to support (i.e. video/mp4, video/ogg, video/webm). Et voila! The Mystery of the Broken Video Tag is solved. After Thoughts After solving the mystery, I still had the question about why my site worked in Chrome, Safari, and Firefox 3.6. After asking around, the best answer that I received was from my colleague Tyler Doerksen. He said that IE9 likely depends on the server telling it what kind of file it is downloading rather than trying to read the metadata about the data it is trying to download before doing anything. I have no facts to back this up, but it makes sense to me. In a browser war where milliseconds can make your browser fall back a few places in the race for supremacy, maybe the IE team opted to depend on the server knowing what kind of content it is serving up. Makes sense to me. In any case, that is just an educated guess. If you have any comments, feel free to post on them below. This post also appears at http://david.wes.st
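
    For readers who hit the same wall on IIS 7+ rather than Cassini, the equivalent fix is registering the MIME types in web.config; a sketch (the extensions are chosen to match the formats mentioned above):

      <system.webServer>
        <staticContent>
          <mimeMap fileExtension=".mp4" mimeType="video/mp4" />
          <mimeMap fileExtension=".ogv" mimeType="video/ogg" />
          <mimeMap fileExtension=".webm" mimeType="video/webm" />
        </staticContent>
      </system.webServer>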

    Read the article

  • ASP.NET MVC, Url Routing: Maximum Path (URL) Length

    - by Martin Aatmaa
    The Scenario I have an application where we took the good old query string URL structure: ?x=1&y=2&z=3&a=4&b=5&c=6 and changed it into a path structure: /x/1/y/2/z/3/a/4/b/5/c/6 We're using ASP.NET MVC and (naturally) ASP.NET routing. The Problem The problem is that our parameters are dynamic, and there is (theoretically) no limit to the number of parameters that we need to accommodate. This is all fine until we got hit by the following train: HTTP Error 400.0 - Bad Request ASP.NET detected invalid characters in the URL. IIS would throw this error when our URL got past a certain length. The Nitty Gritty Here's what we found out: This is not an IIS problem IIS does have a max path length limit, but the above error is not it. See learn.iis.net, "How to Use Request Filtering", section "Filter Based on Request Limits". If the path were too long for IIS, it would throw a 404.14, not a 400.0. Besides, the IIS max path (and query string) lengths are configurable: <requestLimits maxAllowedContentLength="30000000" maxUrl="260" maxQueryString="25" /> This is an ASP.NET Problem After some poking around: IIS Forums Thread: ASP.NET 2.0 maximum URL length? http://forums.iis.net/t/1105360.aspx it turns out that this is an ASP.NET (well, .NET really) problem. The gist of the matter is that, as far as I can tell, ASP.NET cannot handle paths longer than 260 characters. The nail in the coffin is that this is confirmed by Phil the Haack himself: Stack Overflow ASP.NET url MAX_PATH limit Question ID 265251 The Question So what's the question? The question is: how big of a limitation is this? For my app, it's a deal killer. For most apps, it's probably a non-issue. What about disclosure? Nowhere that ASP.NET routing is mentioned have I ever heard a peep about this limitation. The fact that ASP.NET MVC uses ASP.NET routing makes the impact of this even bigger. What do you think?
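
    A postscript for anyone reading this on .NET 4.0 or later: the limit became configurable there, along these lines in web.config (the values shown are examples, not recommendations):

      <system.web>
        <httpRuntime maxUrlLength="1024"
                     maxQueryStringLength="2048"
                     relaxedUrlToFileSystemMapping="true" />
      </system.web>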

    Read the article

  • number->string and related procedures in GIMP scheme scripting

    - by dim fish
    I am frustrated with string-to-number and number-to-string conversion in GIMP scripting. I am running GIMP 2.6.8 on Windows Vista. I understand that GIMP's internal Scheme implementation changes between versions, and I can't seem to nail down the documentation. From what I can gather, GIMP's Scheme is a subset of TinyScheme and/or supports the R5RS standard procedures. In any case, I usually just look in the packaged script directory for examples when I want to try something new, because that should work for sure, right? For example, grid-system.scm comes with the latest GIMP release and has the expression (string-append (number->string obj) " ") which is exactly what I want. However, if I use number->string in my own script, or even type it into GIMP's script console (which is how I usually test out new stuff), it tells me number->string is an unbound variable: > (number->string 3) Error: eval: unbound variable: number->string Other standard procedures from, say, R5RS work just fine: > (string-append "frust" "rated") "frustrated" So, 1) Is there some lurking documentation for current GIMP Scheme scripting, short of something drastic like searching GIMP's source code? 2) Can I use the GIMP console to spit out a list of all defined procedures, to find something I need? 3) Can anyone else confirm that number->string is not defined in the current Windows build, even though it appears in the packaged scripts? My web searches haven't turned up any related problems, and a complete uninstall of all GIMP versions, followed by reinstalling the latest, leaves me in the same scrape.
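
    In case it helps while the real cause is tracked down: TinyScheme implements number->string in its init.scm library on top of the native atom->string primitive, so if a build loads the C core but misses that library code, a stopgap definition along these lines may work (a guess based on TinyScheme's sources, ignoring the optional radix argument; whether atom->string exists in your build is an assumption worth testing first):

      ; stopgap: delegate to TinyScheme's native atom->string, if present
      (define (number->string n)
        (atom->string n))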

    Read the article

  • Silverlight 4 seems like starving of memory

    - by Marco
    I have been playing a bit with Silverlight, trying to port my Silverlight 3.0 application to Silverlight 4.0. My application loads different XAP files and, upon a user request, creates an instance of a XAML user control and adds it to the main container, in a sort of MEF approach, so that I can have an extensible and pluggable application. The application is pretty huge, and to keep the performance and initial loading acceptable I have built some helper classes to load in the background all pages and user controls that might be used later on. On Silverlight 3.0 everything was running smoothly without any problem so far. Switching to SL 4.0, I have noticed that when the process gets to creating the instances of the user controls, the layout freezes unexpectedly for a minute, and sometimes more. Looking at the task manager, the memory usage of IE jumps from 50MB to 400MB and sometimes up to 1.5 GB. If the process doesn't go that far, the layout is rendered properly, even though the memory usage is still extremely high; otherwise everything crashes due to an out-of-memory exception. Running the same application compiled in SL3, the memory used is about 200MB when all the user controls are loaded. Time spent loading the application is about 10 seconds in SL3, while it takes up to 3 minutes in SL4. There are no transparencies, no opacities set, no effects and no animations in the layout. User controls are instantiated on the fly and added to or removed from the visual tree on purpose when the user switches from one screen to another. The resources are all cleaned up properly when a user control is removed from the visual tree, to allow the GC to operate in the background. I may be doing something wrong, but I could not figure out where exactly to nail down the source of this problem. As far as I know there is no memory profiler in SL4 that can help me find where to look, but then again, I may not be up to date on newly available debugging tools.

    Read the article

  • onfocus="this.blur();" problem

    - by carpenter
    // I am trying to apply an "onfocus="this.blur();"" so as to remove the dotted border lines around pics that are being clicked on // the effect should be applied to all thumbnail links/a-tags within a div.. // pseudo-code (where I am): $(".box a").focus( // so as to affect only a tags within divs of class=box | mousedown vs. onfocus vs. *** ?? | javascript/jquery... ??? function () { var num = $(this).attr('id').replace('link_no', ''); alert("Link no. " + num + " was clicked on, but I would like an onfocus=\"this.blur();\" effect to work here instead of the alert..."); // pseudo bits of code that I'm after: // $('#link_no' + num).blur(); // $(this).blur(); // $(this).onfocus = function () { this.blur(); }; } ); // the below works for me in Firefox and IE also, but I would like it to affect only a tags within my div with class="box" function blurAnchors2() { if (document.getElementsByTagName) { var a = document.getElementsByTagName("a"); for (var i = 0; i < a.length; i++) { a[i].onfocus = function () { this.blur(); }; } } }
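
    For the record, a compact jQuery version of what the pseudo-code above is reaching for, assuming the markup is thumbnail links inside div elements with class="box":

      // blur any a tag inside div.box as soon as it receives focus
      $(".box a").focus(function () {
          this.blur();
      });

      // CSS alternative: hide the dotted outline instead of stealing focus,
      // which is kinder to keyboard users
      // .box a:focus { outline: none; }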

    Read the article

  • SSRS2005 timeout error

    - by jaspernygaard
    Hi, I've been running around in circles for the last 2 days, trying to figure out a problem in our customer's live environment. I figured I might as well post it here, since Google gave me very limited information on the error message (5 results, to be exact). The error boils down to a timeout when requesting a certain report in SSRS 2005, when a certain parameter is used. The deployment scenario is: Machine #1 running Reporting Services (SQL 2005, W2K3, IIS6); Machine #2 running the data warehouse database (SQL 2005, W2K3), which is the data source for #1. Both machines are running on the same VM cluster and LAN. The report requests a fairly simple SP, let's call it sp(param $a, param $b). When requested with param $a filled, it executes correctly. When using param $b, it times out after the global timeout period has passed. If I run the stored procedure with param $b directly from SQL Management Studio on #2, it returns the results perfectly fine (within 3-4s). I've profiled the data warehouse database on #2, and when param $b is used, the query from the Reporting Service to the database never reaches #2. The error message that I get upon timeout, when using param $b and invoking the report directly from the SSRS web interface, is: "An error has occurred during report processing. Cannot read the next data row for the data set DataSet. A severe error occurred on the current command. The results, if any, should be discarded. Operation cancelled by user." The ExecutionLog for SSRS doesn't give me much information besides the error message rsProcessingAborted. I'm running out of ideas on how to nail this problem down, so I would greatly appreciate any comments, suggestions or ideas. Thanks in advance!

    Read the article

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

    Read the article

  • JUnit test failing - complaining of missing data that was just inserted

    - by Collin Peters
    I have an extremely odd problem in my JUnit tests that I just can't seem to nail down. I have a multi-module Java webapp project with a fairly standard structure (DAOs, service classes, etc...). Within this project I have a 'core' project which contains some abstracted setup code that inserts a test user along with the items a user needs (in this case an 'enterprise'; a user must belong to an enterprise, and this is enforced at the database level). Fairly simple so far... but here is where the strangeness begins: some tests fail to run and throw a database exception complaining that a user cannot be inserted because an enterprise does not exist. But the enterprise was just created in the preceding line of code! And there were no errors on the insertion of the enterprise. Stranger yet, if this test class is run by itself, everything works fine. It is only when the test is run as part of the project that it fails! And the exact same abstracted code was run by 10+ tests before the one that fails! I have been banging my head against a wall with this for days and haven't really made any progress. I'm not even sure what information to offer up to help diagnose this. Using JUnit 4.4, Spring 2.5.6, iBatis 2.3.0, Postgresql 8.3. Switching to org.springframework.jdbc.datasource.DriverManagerDataSource from org.apache.commons.dbcp.BasicDataSource changed the problem: using DriverManagerDataSource the tests work for the first time, but now all of a sudden a lot of data isn't rolled back in the database! It leaves everything behind, all with no errors. Tests fail when run via both Eclipse and Maven. Please ask for any info which may help me solve my problem!

    Read the article

  • Digitally sign MS Office (Word, Excel, etc..) and PDF files on the server

    - by Sébastien Nussbaumer
    I need to digitally sign MS Office and PDF files that are stored on a server. I really mean a digital signature that is integrated in the document, according to each specific file format. This is the process I had in mind: Create a hash of the file's content. Send the hash to a custom-written Java applet in the browser. The user encrypts the hash with his/her private key (on a USB token via PKCS#11, for example), thus effectively signing the file. The applet then sends the signature to the server. On the server I would then incorporate the signature into the file (MS Office and PDF files can accommodate that without changing the file's content, probably by just setting some metadata field). What is cool is that you never have to download and upload the complete file to the server again. What is even cooler, the customer doesn't need Office or a PDF writer to sign the files. Parts 2, 3 and 4 are OK for me; my company bought all the Java technology I need for that for a previous project I worked on. Problem: I can't seem to find any documentation/examples to do parts 1 and 5 for Office files. Are my Google skills failing me this time? Do you have any pointers to documentation or examples for doing that for MS Office files? The underlying technology isn't that important to me: I can use Java, .Net, COM; any working technology is OK! Note: I'm 95% sure I can nail points 1 and 5 for PDF files using iText. Thanks. ** Edit: If I can't do that with hashes and must download the complete file to the client, that's also possible. But then I still need the documentation to be able to sign Office files... in Java this time (from an applet).
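
    Part 1, at least, is plain Java; a sketch (with the caveat that the Office and PDF signing schemes hash a canonicalized form of the content defined by each file format, not necessarily the raw bytes on disk):

      import java.nio.file.Files;
      import java.nio.file.Paths;
      import java.security.MessageDigest;

      public class FileHasher {
          // hashes the raw file bytes; format-aware canonicalization is not done here
          public static byte[] sha256(String path) throws Exception {
              byte[] content = Files.readAllBytes(Paths.get(path));
              return MessageDigest.getInstance("SHA-256").digest(content);
          }
      }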

    Read the article

  • Finding cause of memory leaks in large PHP stacks

    - by Mike B
    I have a CLI script that runs over several thousand iterations between runs, and it appears to have a memory leak. I'm using a tweaked version of Zend Framework with Smarty for view templating, and each iteration uses several MB worth of code. The first run immediately uses nearly 8MB of memory (which is fine), but every following run adds about 80kb. My main loop looks like this (very simplified): $users = UsersModel::getUsers(); foreach($users as $user) { $obj = new doSomethingAwesome(); $obj->run($user); $obj = null; unset($obj); } The point is that everything in scope should be unset and the memory freed. My understanding is that PHP runs its garbage collection process at its own discretion, but does so at the end of functions/methods/scripts. So something must be leaking memory inside doSomethingAwesome(), but as I said it is a huge stack of code. Ideally, I would love to find some sort of tool that displays all my variables, no matter the scope, at some point during execution: some sort of symbol-table viewer for PHP. Does anything like that, or any other tool that could help nail down memory leaks in PHP, exist?
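
    One low-tech way to corner a leak like this before reaching for tooling: log memory_get_usage() every N iterations and bisect by disabling parts of doSomethingAwesome() until the growth stops (Xdebug's function traces can record per-call memory deltas as well). A sketch against the simplified loop above:

      $users = UsersModel::getUsers();
      $baseline = memory_get_usage(true);
      $i = 0;
      foreach ($users as $user) {
          $obj = new doSomethingAwesome();
          $obj->run($user);
          unset($obj);
          if (++$i % 100 === 0) {
              // steady growth here means run() keeps references alive somewhere
              printf("iteration %d: +%d bytes\n", $i, memory_get_usage(true) - $baseline);
          }
      }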

    Read the article

  • Factorial function - design and test.

    - by lukas
    I'm trying to nail down some interview questions, so I started with a simple one. Design the factorial function. This function is a leaf (no dependencies, easily testable), so I made it static inside the helper class. public static class MathHelper { public static int Factorial(int n) { Debug.Assert(n >= 0); if (n < 0) { throw new ArgumentException("n cannot be lower than 0"); } Debug.Assert(n <= 12); if (n > 12) { throw new OverflowException("Overflow occurs above 12 factorial"); } //by definition if (n == 0) { return 1; } int factorialOfN = 1; for (int i = 1; i <= n; ++i) { //checked //{ factorialOfN *= i; //} } return factorialOfN; } } Testing: [TestMethod] [ExpectedException(typeof(OverflowException))] public void Overflow() { int temp = FactorialHelper.MathHelper.Factorial(40); } [TestMethod] public void ZeroTest() { int factorialOfZero = FactorialHelper.MathHelper.Factorial(0); Assert.AreEqual(1, factorialOfZero); } [TestMethod] public void FactorialOf5() { int factOf5 = FactorialHelper.MathHelper.Factorial(5); Assert.AreEqual(120,factOf5); } [TestMethod] [ExpectedException(typeof(ArgumentException))] public void NegativeTest() { int factOfMinus5 = FactorialHelper.MathHelper.Factorial(-5); } I have a few questions: Is it correct? (I hope so ;) ) Does it throw the right exceptions? Should I use a checked context, or is this trick ( n > 12 ) OK? Is it better to use uint instead of checking for negative values? Future improvements: overloads for long, decimal, BigInteger, or maybe a generic method? Thank you

    Read the article

  • Order words by number of letters, then place words neatly

    - by bmaster
    I have a list of words in javascript similar to this: var words = ["mine", "minute", "mist", "mixed", "money", "monkey", "month", "moon", "morning", "mother", "motion", "mountain", "mouth", "move", "much", "muscle", "music", "nail", "name", "narrow", "nation", "natural", "near", "necessary", "neck", "need", "needle", "nerve", "net", "new", "news", "night"]; The words can be 1-25? letters long. I have a div id="words", with a set width of 700px (but I might change it from this). Using css/javascript/jquery, how can I make it: Order the words by number of letters Place the words inside the div tag, left to right, but so that there are no gaps at the right edge of the words div, and there is even spacing between words on a line. Each word should have a border around it and a background. Like this: |reallylongwordssdf shorterwordfdf dfsdfsdfsdf sdfsdfsdf| |sdfsdfsdf sdffsdop sdfjpogs sdfsds dfsdsd dfsdsd dfsdsd| I really have no idea where to begin with this. Perhaps I could manage to write code to order the words by number of letters, but after that, I'd be stuck. Edit: I forgot to add, the words must be links.
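
    A sketch of one way to do this, assuming jQuery is on the page and the container keeps its fixed width; text-align: justify handles the "no gaps at the right edge" requirement for every line except the last:

      /* CSS */
      #words { width: 700px; text-align: justify; }
      #words a { display: inline-block; margin: 2px 0; padding: 1px 4px;
                 border: 1px solid #999; background: #eee; }

      // JS: longest words first, as in the diagram above, each emitted as a link
      words.sort(function (a, b) { return b.length - a.length; });
      $.each(words, function (i, w) {
          $('<a>', { href: '#', text: w }).appendTo('#words');
          $('#words').append(' ');  // whitespace gives justify gaps to stretch
      });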

    Read the article

  • [Concept] How does unlink() find the file to delete?

    - by Prasad
    My app has a 'Photo' field to store a URL. It uses sfWidgetFormInputFileEditable for the widget schema. To delete the old image when a new image is uploaded, I use unlink before setting the value in the overridden setter, and it works!!! if (file_exists($this->_get('photo'))) unlink($this->_get('photo')); Photos are stored in uploads/photos, and when saving 'Photo' only the file name xxx-yyy.zzz is saved (not the full path). However, I wish to know how symfony/PHP knows the full path of the file to be deleted? Part 2: I am using sfThumbnailPlugin to generate thumbnails. So the actual code looks like this: public function setPhoto($value) { if(!empty($value)) { Contact::generateThumbnail($value); // delete current Photo & create thumbnail $this->_set('photo',$value); // setting new value after deleting old one } } public function generateThumbnail($value) { $uploadDir = sfConfig::get('app_photo_upload'); // path to upload folder if (file_exists($this->_get('photo'))) { unlink($this->_get('photo')); // delete full-size image // path to thumbnail $thumbpath = $uploadDir.'/thumbnails/'.$this->get('photo'); // read a blog, tried setting dir manually, doesn't work :( //chdir('/thumbnails/'); // tried closing the file too, doesn't work! :( //fclose($thumbpath) or die("can't close file"); //unlink($this->_get('photo')); // doesn't work; no error :( unlink($thumbpath); // doesn't work, no error :( } $thumbnail = new sfThumbnail(150, 150); $thumbnail->loadFile($uploadDir.'/'.$value); $thumbnail->save($uploadDir.'/thumbnails/'.$value, 'image/png'); } Why can't the thumbnail be deleted using unlink()? Is the sequence of ops incorrect? Is it because the old thumbnail is displayed in the sfWidgetFormInputFileEditable widget? I've spent hours trying to figure this out, but I'm unable to nail down the real cause. Thanks in advance.

    Read the article

  • The perverse hangman problem

    - by Shalmanese
    Perverse Hangman is a game played much like regular Hangman, with one important difference: the winning word is determined dynamically by the house depending on what letters have been guessed. For example, say you have the board _ A I L and 12 remaining guesses. Because there are 13 different words ending in AIL (bail, fail, hail, jail, kail, mail, nail, pail, rail, sail, tail, vail, wail), the house is guaranteed to win: no matter what 12 letters you guess, the house will claim the chosen word was the one you didn't guess. However, if the board was _ I L M, you have cornered the house, as FILM is the only word that ends in ILM. The challenge is: Given a dictionary, a word length and the number of allowed guesses, come up with an algorithm that either: a) proves that the player always wins, by outputting a decision tree for the player that corners the house no matter what, or b) proves the house always wins, by outputting a decision tree for the house that allows the house to escape no matter what. As a toy example, consider the dictionary: bat bar car If you are allowed 3 wrong guesses, the player wins with the following tree:
    Guess B
      NO  -> Guess C, Guess A, Guess R, WIN
      YES -> Guess T
        NO  -> Guess A, Guess R, WIN
        YES -> Guess A, WIN
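
    For anyone wanting to experiment, here is a rough Python sketch of the underlying adversarial search (my own formulation, not from the original post; it assumes all words are the same length, and returns only who wins, though threading the chosen letters through the recursion to emit the decision tree is mechanical):

      def player_wins(words, wrong_left, guessed=frozenset()):
          # The house tracks every word still consistent with the feedback so far.
          # If some candidate is fully revealed, the pattern pins down a unique
          # word and the player has won.
          if any(set(w) <= guessed for w in words):
              return True
          if wrong_left == 0:
              return False
          for letter in set("".join(words)) - guessed:
              # the house answers by choosing which reveal pattern to show, i.e.
              # it partitions the candidates by the positions of `letter`
              buckets = {}
              for w in words:
                  positions = tuple(i for i, c in enumerate(w) if c == letter)
                  buckets.setdefault(positions, []).append(w)
              if all(player_wins(bucket,
                                 wrong_left - (0 if positions else 1),
                                 guessed | {letter})
                     for positions, bucket in buckets.items()):
                  return True   # this letter corners the house
          return False          # every letter lets the house escape

      print(player_wins(["bat", "bar", "car"], 3))   # True, per the toy example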

    Read the article

  • PHP form validation function

    - by Barbs
    I am currently writing some PHP form validation (I have already validated client-side) and have some repetitive code that I think would work well in a nice little PHP function. However, I'm having trouble getting it to work. I'm sure it's just a matter of syntax, but I just can't nail it down. Any help appreciated. //Validate phone number field to ensure 8 digits, no spaces. if(0 === preg_match("/^[0-9]{8}$/",$_POST['Phone'])) { $errors['Phone'] = "Incorrect format for 'Phone'"; } if(!$errors) { //Do some stuff here.... } I found that I was writing the validation code a lot, and I could save some time and some lines of code by creating a function. //Validate Function function validate($regex,$index,$message) { if(0 === preg_match($regex,$_POST[$index],$message)) { $errors[$index] = $message; } } And call it like so.... validate("/^[0-9]{8}$/","Phone","Incorrect format for Phone"); Can anyone see why this wouldn't work? Note I have disabled the client-side validation while I work on this, to try to trigger the error, so the value I am sending for 'Phone' is invalid.
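
    For what it's worth, two things stand out in the function as posted: $errors is not visible inside validate() (PHP functions don't see outer-scope variables), and the third argument of preg_match() is the matches out-parameter, so passing $message there overwrites it. One possible reshape, sketched:

      // return the error message (or null) instead of writing to $errors,
      // which is out of scope inside the function
      function validate($regex, $field, $message) {
          if (0 === preg_match($regex, $_POST[$field])) {
              return $message;
          }
          return null;
      }

      $errors = array();
      if ($msg = validate("/^[0-9]{8}$/", "Phone", "Incorrect format for 'Phone'")) {
          $errors['Phone'] = $msg;
      }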

    Read the article

  • ExtJS: Ext.data.DataReader: #realize was called with invalid remote-data

    - by TomH
    I'm receiving a "Ext.data.DataReader: #realize was called with invalid remote-data" error when I create a new record via a POST request. Although similar to the discussion at this SO conversation, my situation is slightly different: My server returns the pk of the new record and additional information that is to be associated with the new record in the grid. My server returns the following: {'success':true,'message':'Created Quote','data': [{'id':'610'}, {'quoteNumber':'1'}]} Where id is the PK for the record in the mysql database. quoteNumber is a db generated value that needs to be added to the created record. Other relevant bits: var quoteRecord = Ext.data.Record.create([{name:'id', type:'int'},{name:'quoteNumber', type:'int'},{name:'slideID'}, {name:'speaker'},{name:'quote'}, {name:'metadataID'}, {name:'priorityID'}]); var quoteWriter = new Ext.data.JsonWriter({ writeAllFields:false, encode:true }); var quoteReader = new Ext.data.JsonReader({id:'id', root:'data',totalProperty: 'totalitems', successProperty: 'success',messageProperty: 'message',idProperty:'id'}, quoteRecord); I'm stumped. Anyone?? thanks tom
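
    A guess based on the error text rather than on this exact setup: Ext's JsonReader generally wants the data property to hold one complete record object (or an array of complete record objects), not an array of one-key objects, so a response shaped like the following may let #realize succeed:

      {"success": true, "message": "Created Quote",
       "data": {"id": 610, "quoteNumber": 1}}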

    Read the article

  • Creating meaningful add URLs in Cakephp

    - by Loftx
    Hi there, For my site I have a number of Orders, each of which contains a number of Quotes. A quote is always tied to an individual order, so in the quotes controller I add a quote with reference to its order: function add($orderId) { // function here } And the calling URL looks a bit like http://www.example.com/quotes/add/1 It occurred to me the URLs would make more sense if they looked a bit more like http://www.example.com/orders/1/quotes/add as the quote is being added to order 1. Is this something it's possible to achieve in CakePHP? Cheers, Tom
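
    For what it's worth, a sketch of how this might look with a custom route in config/routes.php (CakePHP 1.3-style; the parameter name orderId is illustrative):

      Router::connect(
          '/orders/:orderId/quotes/add',
          array('controller' => 'quotes', 'action' => 'add'),
          array('pass' => array('orderId'), 'orderId' => '[0-9]+')
      );

      // QuotesController::add($orderId) then receives the order id as before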

    Read the article
