Search Results

Search found 4640 results on 186 pages for 'numerical integration'.

Page 19/186 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • UIDocumentInteractionController in iOS for Whatsapp Integration

    - by Smitha
    I was looking for a way to share files created by my application through WhatsApp. I found that we can chat using the custom URL scheme provided by WhatsApp: https://www.whatsapp.com/faq/iphone/23559013. But to share my app's files, I read that I have to use UIDocumentInteractionController. So, is there any sample available? Also, the Apple docs say, "Create an instance of the UIDocumentInteractionController class for each file you want to interact with" (https://developer.apple.com/library/ios/documentation/FileManagement/Conceptual/DocumentInteraction_TopicsForIOS/Articles/PreviewingandOpeningItems.html#//apple_ref/doc/uid/TP40010410-SW1). So is it necessary to create an instance for each file? I just want to share files as and when I receive them through my service in my app. Thanks for the help in advance.

    Read the article

  • Twitter integration

    - by qaisjp
    My computer game is built with Love2d in Lua. There is dead space in the menu of my game and I'd like to fill it with something, so I'd like to put a Twitter feed there. How can I receive all the Twitter posts created by AND mentioning @stickydestroyer, and how can I make it look good and code the actual thing? I know I have to use some sort of cURL module, but how can I get the feed AND make it look nice?
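    A sketch of the fetch-and-filter half, in Python for illustration (a Love2d game would need a Lua HTTP library instead); the feed URL and JSON field names here are placeholders, not Twitter's real API:

        import json
        import urllib.request

        FEED_URL = "https://example.com/feed.json"   # placeholder, NOT Twitter's real API
        HANDLE = "stickydestroyer"

        def fetch_posts(url=FEED_URL):
            # Assumed response shape: a JSON list of {"user": ..., "text": ...} objects.
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.load(resp)

        def by_or_mentioning(posts, handle=HANDLE):
            # Keep posts written by the handle OR mentioning "@handle" in the text.
            return [p for p in posts
                    if p.get("user") == handle or ("@" + handle) in p.get("text", "")]

        for post in by_or_mentioning(fetch_posts()):
            print(post["user"] + ": " + post["text"])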

    Read the article

  • Libreoffice theme integration not working (Xubuntu)

    - by Treepata
    I am running Xubuntu 11.10 and everything works nicely. I have selected a nice Xfce theme and everything looks beautiful, except Libreoffice. While other programmes (Gimp, Inkscape, Thunar etc.) integrate with the selected Appearance & Window Manager theme, Libreoffice looks like a Windows 98 programme. Does anyone know how to fix this? I have already tried this solution, without success: http://ubuntuforums.org/showthread.php?t=1584010&page=2 Thanks for your help!

    Read the article

  • How to manage end user documentation for a project under continuous integration?

    - by mcdon
    I have a project under continuous integration and would like to add end user documentation to the project. The end user documentation is a user manual, not API documentation. In our environment we use Windows, C#, MSBuild, CruiseControl.NET and Subversion. We are currently using DocToHelp to create our help file, which is based on an MS Word document. I'm looking for some guidance on how to manage the end user documentation. What documentation tools should I use? Should any of the documentation tools be part of the build script? Should the output files from the documentation tool be stored in Subversion? What type of help files would be best to use?

    Read the article

  • How to leverage Spring Integration in a real-world JMS distributed architecture?

    - by ngeek
    For the following scenario I am looking for your advice and tips on best practices: In a distributed (mainly Java-based) system with: many (different) client applications (web-app, command-line tools, REST API); a central JMS message broker (currently in favor of using ActiveMQ); multiple stand-alone processing nodes (running on multiple remote machines, computing expensive operations of different types as specified by the JMS message payload). How would one best apply the JMS support provided by the Spring Integration framework to decouple the clients from the worker nodes? Reading through the reference documentation and running some very first experiments, it looks like the configuration of a JMS inbound adapter inherently requires a subscriber, which in a decoupled scenario does not exist. Small side note: communication should happen via JMS text messages (using a JSON data structure for future extensibility).
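    For what it's worth, the decoupling being asked about is the classic request/reply-over-a-broker pattern: clients and workers share only queue names, never references to each other. A minimal sketch in Python, with in-process queues standing in for the JMS destinations (all names are illustrative, not Spring Integration's API):

        import json
        import queue
        import threading
        import uuid

        work_queue = queue.Queue()   # stands in for the shared JMS work queue
        reply_queues = {}            # reply-to destinations, keyed by name

        def worker():
            while True:
                msg = json.loads(work_queue.get())    # JSON text payload, as in the post
                result = {"correlation_id": msg["correlation_id"],
                          "result": msg["payload"].upper()}   # placeholder "expensive" work
                reply_queues[msg["reply_to"]].put(json.dumps(result))

        def client(payload):
            # Each request names a private reply-to queue and carries a correlation id,
            # so the client never needs to know which worker picked it up.
            reply_to = "reply-" + str(uuid.uuid4())
            reply_queues[reply_to] = queue.Queue()
            corr_id = str(uuid.uuid4())
            work_queue.put(json.dumps({"correlation_id": corr_id,
                                       "reply_to": reply_to,
                                       "payload": payload}))
            reply = json.loads(reply_queues[reply_to].get(timeout=5))
            assert reply["correlation_id"] == corr_id
            return reply["result"]

        threading.Thread(target=worker, daemon=True).start()
        print(client("compute me"))   # prints: COMPUTE ME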

    Read the article

  • Why hasn't anybody started a hosted continuous integration service?

    - by Teflon Ted
    There are a dozen services that provide hosted version control, hosted ticket tracking, hosted project management, and combinations of all of the above; there are even hosted web-based IDEs. But nobody has yet offered a hosted continuous integration service, at least none that I can find. The concept seems simple enough: I register and provide the URL to my source code repository, it grabs my code and builds it via ant/rake/whatever, then runs the suite of tests and some metrics (code coverage, performance, etc.). Is there some prohibitive barrier to entry I'm not considering?

    Read the article

  • php: fopen() of a URL breaks for domain names, not for numerical addresses

    - by b0fh
    After hours of trying to debug a third-party application having trouble with fopen(), I finally discovered that

        php -r 'echo(file_get_contents("http://www.google.com/robots.txt"));'

    fails, but

        php -r 'echo(file_get_contents("http://173.194.32.81/robots.txt"));'

    succeeds. Note that as the webserver user, I can ping www.google.com and it resolves just fine. I straced both executions of PHP, and they diverge like this. For the numerical v4 URL:

        socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 3
        fcntl(3, F_GETFL) = 0x2 (flags O_RDWR)
        fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0
        connect(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("173.194
        poll([{fd=3, events=POLLOUT}], 1, 0) = 0 (Timeout)
        ...[bunch of poll/select/recvfrom]...
        close(3) = 0

    For the domain name:

        socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 3
        close(3) = 0

    PHP didn't even try to do anything with that socket, it seems. Or even resolve the domain, for that matter. WTF? Recompiling PHP with or without IPv6 support did not seem to matter. Disabling IPv6 on this system is not desirable. Gentoo Linux, PHP 5.3.14, currently giving PHP 5.4 a try to see if it helps. Anyone have an idea?

    EDIT:

        php -r 'echo gethostbyname("www.google.com");'

    works and yields an IPv4 address, while

        php -r 'echo(file_get_contents("http://[2a00:1450:4007:803::1011]/"));'

    seems to return a blank result.

    EDIT 2: I didn't even notice the first time that the v6 socket opened when the name is used is a SOCK_DGRAM.
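    A quick way to see what the resolver hands back, and in what order (a diagnostic sketch, not from the original post):

        import socket

        # List the resolver's answers in the order a client's connect logic would
        # see them. If an IPv6 answer comes first and the v6 path fails silently,
        # the name-based fetch breaks while the literal v4 address keeps working.
        for family, _, _, _, sockaddr in socket.getaddrinfo("www.google.com", 80,
                                                            proto=socket.IPPROTO_TCP):
            label = "IPv6" if family == socket.AF_INET6 else "IPv4"
            print(label, sockaddr[0])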

    Read the article

  • Algorithms for finding a numerical record in a list of ordered numbers

    - by Ankur
    I have an incomplete, ordered list of numbers. I want to find a particular number with as few steps as possible. Are there any improvements on this algorithm? I assume you can count the set size without difficulty; it will be stored and updated every time a new item is added. Your objective is to get your cursor over the value x. The first (smallest) number is s, and the last (greatest) number is g. Take the midpoint m1 of the set and test whether x < m1. If yes, then s <= x < m1; if no, then m1 <= x <= g, and if m1 = x you're done. Keep repeating until you find x, basically dividing the set into two parts with each iteration until you hit x. The purpose is to retrieve a numerical id from a very large table and then find the associated records. I would imagine this is the most trivial kind of indexing available; are there improvements?
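    What the post describes is binary search over a sorted list; a minimal sketch in Python (illustrative, not from the original post):

        # Binary search over a sorted list: O(log n) comparisons, exactly the
        # halving scheme described above.
        def binary_search(items, x):
            lo, hi = 0, len(items) - 1        # s and g, as indices
            while lo <= hi:
                mid = (lo + hi) // 2          # the midpoint m1
                if items[mid] == x:
                    return mid                # found: cursor is over x
                elif x < items[mid]:
                    hi = mid - 1              # x lies in s <= x < m1
                else:
                    lo = mid + 1              # x lies in m1 < x <= g
            return -1                         # x is not in the (incomplete) list

        ids = [3, 7, 9, 14, 20, 31, 46]
        print(binary_search(ids, 20))   # 4

    On a database table, the same logarithmic lookup is what a B-tree index on the id column provides, so the practical improvement is usually an index rather than a hand-written search.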

    Read the article

  • Convert a Dynamic[] construct to a numerical list

    - by Leo Alekseyev
    I have been trying to put together something that allows me to extract points from a ListPlot in order to use them in further computations. My current approach is to select points with a Locator[]. This works fine for displaying points, but I cannot figure out how to extract numerical values from a construct with head Dynamic[]. Below is a self-contained example. By dragging the gray locator, you should be able to select points (indicated by the pink locator and stored in q, a list of two elements); this is the second line below the plot. Now I would like to pass q[[2]] to a function, or perhaps simply display it. However, Mathematica treats q as a single entity with head Dynamic, and thus taking the second part is impossible (hence the error message). Can anyone shed light on how to convert q into a regular list?

        EuclideanDistanceMod[p1_List, p2_List, fac_: {1, 1}] /; Length[p1] == Length[p2] :=
          Plus @@ (fac.MapThread[Abs[#1 - #2]^2 &, {p1, p2}]) // Sqrt;

        test1 = {{1.`, 6.340196001221532`}, {1.`, 13.78779876355869`},
          {1.045`, 6.2634018978377295`}, {1.045`, 13.754947081416544`},
          {1.09`, 6.178367702583522`}, {1.09`, 13.72055251752498`},
          {1.135`, 1.8183153704413153`}, {1.135`, 6.082497198000075`}, {1.135`, 13.684582525399742`},
          {1.18`, 1.6809452373465104`}, {1.18`, 5.971583107298081`}, {1.18`, 13.646996905469383`},
          {1.225`, 1.9480537697339537`}, {1.225`, 5.838386922625636`}, {1.225`, 13.607746407088161`},
          {1.27`, 2.1183174369679234`}, {1.27`, 5.669799095595362`}, {1.27`, 13.566771130126131`},
          {1.315`, 2.2572975468163463`}, {1.315`, 5.444014254828522`}, {1.315`, 13.523998701347882`},
          {1.36`, 2.380307009155079`}, {1.36`, 5.153024664297602`}, {1.36`, 13.479342200528283`},
          {1.405`, 2.4941312539733285`}, {1.405`, 4.861423833512566`}, {1.405`, 13.432697814928654`},
          {1.45`, 2.6028066447609426`}, {1.45`, 4.619367407525507`}, {1.45`, 13.383942212133244`}};

        DynamicModule[{p = {1.2, 10}, q = {1.3, 11}},
          q := Dynamic@First@test1[[Ordering[
              {#, EuclideanDistanceMod[p, #, {1, .1}]} & /@ test1, 1,
              #1[[2]] < #2[[2]] &]]];
          Grid[{
            {Show[{ListPlot[test1, Frame -> True, ImageSize -> 300],
               Graphics@Locator[Dynamic[p]],
               Graphics@Locator[q, Appearance -> {Small}, Background -> Pink]}]},
            {Dynamic@p},
            {q},
            {q[[2]]}}]]

    Read the article

  • Retain numerical precision in an R data frame?

    - by David
    When I create a data frame from numeric vectors, R seems to truncate the values below the precision that I require in my analysis:

        data.frame(x=0.99999996)

    returns 1 (see update 1). I am stuck when fitting spline(x,y) and two of the x values are set to 1 due to rounding while y changes. I could hack around this, but I would prefer to use a standard solution if available.

    Example: here is an example data set:

        d <- data.frame(x = c(0.668732936336141, 0.95351462456867, 0.994620622127435,
                              0.999602102672081, 0.999987126195509, 0.999999955814133,
                              0.999999999999966),
                        y = c(38.3026509783688, 11.5895099585560, 10.0443344234229,
                              9.86152339768516, 9.84461434575695, 9.81648333804257,
                              9.83306725758297))

    The following solution works, but I would prefer something that is less subjective:

        plot(d$x, d$y, ylim=c(0,50))
        lines(spline(d$x, d$y), col='grey')                     # bad fit
        lines(spline(d[-c(4:6),]$x, d[-c(4:6),]$y), col='red')  # reasonable fit

    Update 1: Since posting this question, I realize that this will print as 1 even though the data frame still contains the original value, e.g.

        > dput(data.frame(x=0.99999999996))

    returns

        structure(list(x = 0.99999999996), .Names = "x", row.names = c(NA, -1L), class = "data.frame")

    Update 2: After using dput to post this example data set, and some pointers from Dirk, I can see that the problem is not in the truncation of the x values but in the limits of the numerical errors in the model that I used to calculate y. This justifies dropping a few of the equivalent data points (as in the example red line).
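    On the display-precision point, a small demonstration (in Python for illustration; R's doubles behave the same way) that such values are stored exactly and only printed rounded:

        # The stored doubles are distinct; only the default printing rounds them.
        import sys

        x = 0.99999999996
        print(x == 1.0)                 # False: the value is stored, not truncated
        print(f"{x:.11f}")              # 0.99999999996 once enough digits are requested
        print(sys.float_info.epsilon)   # ~2.22e-16: spacing of doubles near 1.0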

    Read the article

  • Installing Hyper-V Integration Components on Linux

    - by Lance Fisher
    Some big news this week was that Microsoft released the source code for the Hyper-V integration components for Linux under the GPL v2. I just installed Ubuntu Server 9.04 in a Hyper-V VM with a Legacy Network Adapter. How do I install the integration components? Do I have to wait until they are included in the kernel?

    Read the article

  • And the Winners of Fusion Middleware Innovation Awards in Data Integration are…

    - by Irem Radzik
    At OpenWorld, we announced the winners of the Fusion Middleware Innovation Awards 2012. Raymond James and Morrison Supermarkets were selected in the data integration category for their innovative use of Oracle's data integration products and the great results they have achieved. In this blog I would like to briefly introduce you to these award-winning projects.

    Raymond James is a diversified financial services company, which provides financial planning, wealth management, investment banking, and asset management. They are using Oracle GoldenGate and Oracle Data Integrator to feed their operational data store (ODS), which supports application services across the enterprise. A major requirement for their project was low data latency, as key decisions are made based on the data in the ODS. They were able to fulfill this requirement thanks to Oracle Data Integrator's integrated solution with Oracle GoldenGate. Oracle GoldenGate captures changed data from different systems, including Oracle Database, HP NonStop and Microsoft SQL Server, into a single data store on SQL Server 2008. Oracle Data Integrator provides data transformations for the ODS. Leveraging ODI's integration with GoldenGate, Raymond James now sees a 9-second median latency (from source commit to ODS target commit). The ODS solution delivers high-quality, accurate data for consuming applications such as Raymond James' next-generation client and portfolio management systems, as well as real-time operational reporting. It enables timely information for making better decisions.

    Raymond James achieved more benefits with this implementation of Oracle's data integration solution. The software developers and architects of this solution, Tim Garrod and Ryan Fonnett, told us during their presentation at OpenWorld that they also reduced application complexity significantly while improving developer productivity through trusted operational services. They were able to utilize CDC to generate alerts for business users and for applications (for example, for cache hydration mechanisms). One cool innovation among many in this project: using ODI's flexible architecture, Tim and Ryan could build 24/7 self-healing processes, and these processes have hardly ever failed; the integration processes fix errors themselves. Pretty amazing, and a great solution for environments that need such reliability and availability. (You can see Tim and Ryan's photo with the Innovation Award above.)

    The other winner this year in the data integration category, Morrison Supermarkets, is the UK's 4th largest grocery retailer. The company has been migrating all their legacy applications onto a new-world application set based on Oracle and consolidating all BI onto a single Oracle platform.
    The company recently implemented Oracle Exadata as the data warehouse engine and uses Oracle Business Intelligence EE. Their goal in deploying GoldenGate and ODI was to provide BI data to the enterprise in a way that also supports operational decision-making requirements from a wide range of Oracle-based ERP applications such as E-Business Suite, PeopleSoft, and Oracle Retail Suite. They use GoldenGate's log-based change data capture capabilities and Oracle Data Integrator to populate the Oracle Retail Data Model. The electronic point of sale (EPOS) integration solution they built processes over 80 million transactions/day at busy periods in near real time (15 mins). It provides valuable insight to Retail and Commercial teams for both intra-day and historical trend analysis. As I mentioned in yesterday's blog, the right data integration platform can transform the business. Here is another example: the point-of-sale integration enabled the grocery chain to optimize its stock management, leading to another award. Morrisons won the Grocer 33 award in 2012, beating all other major UK supermarkets in product availability. Congratulations, Morrisons, on another award! Celebrating the innovation and the success of our customers with Oracle's data integration products was definitely a highlight of Oracle OpenWorld for me. I look forward to hearing more from Raymond James, Morrisons, and the other customers that presented their data integration projects at OpenWorld on how they are creating more value for their organizations.

    Read the article

  • Continuous Integration with 64-bit Sharepoint and TFS 2008?

    - by Hirvox
    I've set up a 64-bit TFS 2008 build server with Sharepoint, continuous integration and out-of-the-box MSTest. Unit tests for plain business logic classes run just fine and test results are published into TFS. However, any test that uses Sharepoint's API fails horribly, SPFarm.Local returning null and so on. Is there a way to fix this? The tests run fine in an otherwise identical 32-bit development environment (Windows Server 2008 under Hyper-V, Sharepoint patched up to June 2009 cumulative update) from both Visual Studio and command line, so the problem is not about improper use of SPContext.Current or any other part of the API that needs to be run in a web server context. I've ruled out permissions issues, because the build agent account can deploy the solution and create site collections just fine with stsadm. The next culprit could be that the unit tests were being run with a 32-bit process, which couldn't access the 64-bit Sharepoint API properly. I tried a workaround, but it has the side effect of disabling TFS support in MSTest. Do I have to wait for 2010 versions of MS tools (and hope for the best) or is there a third-party test framework available that runs natively in 64 bit and can publish test results into TFS 2008?

    Read the article

  • Large sparse (stiff) ODE system needed for testing

    - by macydanim
    I hope this is the right place for this question. I have been working on a sparse, stiff, implicit ODE solver and have now finished the code. I have tested the solver with the Van der Pol equation and another stiff problem of dimension 4, but to perform better tests I am searching for a bigger system. I'm thinking of order N = 100...1000, if possible stiff and sparse. Does anybody have an example I could use? I really don't know where to search.
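    One standard family of test problems with tunable size, sketched here in Python (the problem choice is a suggestion, not from the original post): the method-of-lines discretization of a reaction-diffusion equation, which for N grid points yields a stiff ODE system with a sparse, tridiagonal Jacobian:

        # Method-of-lines discretization of u_t = d * u_xx + u*(1 - u) on [0, 1]
        # with zero boundary values: N interior points give an N-dimensional ODE
        # system whose Jacobian is tridiagonal, and stiffness grows like d*N^2.
        import numpy as np
        from scipy.integrate import solve_ivp

        N = 500                      # system size: adjustable, 100...1000 as asked
        d = 1.0                      # diffusion coefficient
        h = 1.0 / (N + 1)            # grid spacing

        def rhs(t, u):
            du = np.empty_like(u)
            du[0] = d * (u[1] - 2*u[0]) / h**2 + u[0]*(1 - u[0])
            du[-1] = d * (u[-2] - 2*u[-1]) / h**2 + u[-1]*(1 - u[-1])
            du[1:-1] = d * (u[2:] - 2*u[1:-1] + u[:-2]) / h**2 + u[1:-1]*(1 - u[1:-1])
            return du

        u0 = np.sin(np.pi * (1 + np.arange(N)) * h)         # smooth initial profile
        sol = solve_ivp(rhs, (0.0, 0.5), u0, method="BDF")  # reference implicit solver
        print(sol.status, sol.y.shape)

    Increasing d or N makes the problem stiffer, and a reference solution from an established implicit solver gives something to check the custom solver against.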

    Read the article

  • Typical Applications of Linear System Solvers in Game Development

    - by craftsman.don
    I am going to write a custom solver for linear systems. I would like to survey the typical problems in games that involve solving linear systems, so that I can tailor optimizations to these problems based on the shape of the matrix. Currently I am focused on these problems: B-spline editing (I use a linear solve to resolve C0, C1, C2 continuity) and constraints in simulation (especially position constraints, cloth). Both of these produce banded matrices. I want to hear about some other applications of linear systems in games. Thank you.
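    Both examples reduce to banded, often tridiagonal, solves, for which the Thomas algorithm gives O(n) work instead of the O(n^3) of a general dense solver; a minimal sketch (illustrative, in Python):

        # Thomas algorithm: solves a tridiagonal system in O(n), the common fast
        # path for banded matrices arising from splines and chain/cloth constraints.
        def thomas(a, b, c, d):
            """a: sub-diagonal (n-1), b: diagonal (n), c: super-diagonal (n-1), d: rhs (n)."""
            n = len(b)
            cp, dp = [0.0] * n, [0.0] * n
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                     # forward elimination
                m = b[i] - a[i - 1] * cp[i - 1]
                cp[i] = c[i] / m if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
            x = [0.0] * n
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):            # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        # 4x4 example: diagonally dominant, as spline systems typically are
        print(thomas([1, 1, 1], [4, 4, 4, 4], [1, 1, 1], [5, 6, 6, 5]))  # [1.0, 1.0, 1.0, 1.0]

    For wider bands, the same idea generalizes to a banded LU factorization.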

    Read the article

  • How is this number calculated?

    - by Hamid
    I have numbers:

        A == 0x20000000
        B == 18
        C == (B/10)
        D == 0x20000004 == (A + C)

    A and D are in hex, but I'm not sure what the assumed numeric bases of the others are (although I'd assume base 10, since they don't explicitly state a base). It may or may not be relevant, but I'm dealing with memory addresses; A and D are pointers. The part I'm failing to understand is how 18/10 gives me 0x4.

    Edit: code for clarity (address1 is a pointer to address 0x20000000):

        printf("Test1: %p\n", address1);
        printf("Test2: %p\n", address1+(18/10));
        printf("Test3: %p\n", address1+(21/10));

    Output:

        Test1: 0x20000000
        Test2: 0x20000004
        Test3: 0x20000008
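    The two effects at work are C's truncating integer division (18/10 == 1) and pointer arithmetic scaling the offset by the pointee size; the same arithmetic worked out in Python (assuming address1 is an int* and sizeof(int) == 4, which matches the output shown):

        # C evaluates address1 + (18/10) with integer division, then scales the
        # offset by the pointee size.
        base = 0x20000000
        sizeof_int = 4   # assumed: address1 is an int* on a 4-byte-int platform

        print(hex(base + (18 // 10) * sizeof_int))  # 18/10 == 1  ->  0x20000004
        print(hex(base + (21 // 10) * sizeof_int))  # 21/10 == 2  ->  0x20000008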

    Read the article

  • Equation solver project

    - by Victor Barbu
    I would like to start a project aimed at students. My application has to solve any kind of equation, passed by the user as a string, exactly like Matlab's solve function. How should I do this? What programming language is the best for this purpose? Thanks in advance. P.S. Matlab screenshots showing how I would like the user to enter an equation and receive the answer were attached.
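    One possible approach, offered as a sketch rather than a recommendation: Python with the SymPy library can parse an equation from a string and solve it symbolically, similar in spirit to Matlab's solve:

        # Sketch: parse an equation given as a string and solve it symbolically.
        from sympy import solve, sympify

        def solve_equation(text):
            # Accept "lhs = rhs" by moving everything to one side.
            if "=" in text:
                lhs, rhs = text.split("=", 1)
                expr = sympify(lhs) - sympify(rhs)
            else:
                expr = sympify(text)
            return solve(expr)        # solves for the free symbol(s)

        print(solve_equation("x**2 - 5*x + 6 = 0"))   # [2, 3]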

    Read the article

  • Can an internally developed fast evolving, agile, short sprint web application lend itself to offshoring?

    - by Gavin Howden
    I have recently been set a target: within 12 months, achieve readiness to successfully manage and deliver results through the use of offshore teams on our mainline development project. Our mainline is a multi-thousand-user, highly available web application, plus various related SAAS components delivered through that web application. We work agile on the mainline with rapid one-week sprints and continuous integration. Our delivery platform is a bespoke PHP framework, although we have some .NET services and components in the mix. My view is that an offshore team could work if we either ship out an entire isolated project for offshore development, or specify a component of our system in huge detail up front. But we don't currently work like that; it would conflict with the in-house method, and unless the offshore team is working within our team, with our development/deployment chain, it could be an integration nightmare. So my question is: given that we have a closed-source bespoke framework (private IP) which we train our developers to use, and we work agile, minimising documentation, maximising communication and responding to rapidly changing requirements, with much of the quality control done via team skills building and peer review, how can I make offshoring work on our mainline development?

    Read the article

  • Performance issues with JMS and Spring Integration. What is wrong with the following configuration?

    - by user358448
    I have a JMS producer which generates many messages per second; they are sent to an ActiveMQ persistent queue and are consumed by a single consumer, which needs to process them sequentially. But it seems that the producer is much faster than the consumer, and I am having performance and memory problems. Messages are fetched very slowly, and consuming seems to happen at intervals (the consumer "asks" for messages in a polling fashion, which is strange?!). Basically everything happens with Spring Integration. Here is the configuration on the producer side. Stake messages first come in on stakesInMemoryChannel, from there they are filtered through filteredStakesChannel, and from there they go into the JMS queue (using an executor so the sending happens in a separate thread):

        <bean id="stakesQueue" class="org.apache.activemq.command.ActiveMQQueue">
            <constructor-arg name="name" value="${jms.stakes.queue.name}" />
        </bean>

        <int:channel id="stakesInMemoryChannel" />
        <int:channel id="filteredStakesChannel" >
            <int:dispatcher task-executor="taskExecutor"/>
        </int:channel>

        <bean id="stakeFilterService" class="cayetano.games.stake.StakeFilterService"/>

        <int:filter input-channel="stakesInMemoryChannel"
                    output-channel="filteredStakesChannel"
                    throw-exception-on-rejection="false"
                    expression="true"/>

        <jms:outbound-channel-adapter channel="filteredStakesChannel"
                                      destination="stakesQueue"
                                      delivery-persistent="true"
                                      explicit-qos-enabled="true" />

        <task:executor id="taskExecutor" pool-size="100" />

    The other application consumes the messages like this: messages come in on stakesInputChannel from the JMS stakesQueue; after that they are routed to 2 separate channels, one persists the message and the other does some other stuff, let's call it "processing".

        <bean id="stakesQueue" class="org.apache.activemq.command.ActiveMQQueue">
            <constructor-arg name="name" value="${jms.stakes.queue.name}" />
        </bean>

        <jms:message-driven-channel-adapter channel="stakesInputChannel"
                                            destination="stakesQueue"
                                            acknowledge="auto"
                                            concurrent-consumers="1"
                                            max-concurrent-consumers="1" />

        <int:publish-subscribe-channel id="stakesInputChannel" />
        <int:channel id="persistStakesChannel" />
        <int:channel id="processStakesChannel" />

        <int:recipient-list-router id="customRouter"
                                   input-channel="stakesInputChannel"
                                   timeout="3000"
                                   ignore-send-failures="true"
                                   apply-sequence="true" >
            <int:recipient channel="persistStakesChannel"/>
            <int:recipient channel="processStakesChannel"/>
        </int:recipient-list-router>

        <bean id="prefetchPolicy" class="org.apache.activemq.ActiveMQPrefetchPolicy">
            <property name="queuePrefetch" value="${jms.broker.prefetch.policy}" />
        </bean>

        <bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
            <property name="targetConnectionFactory">
                <bean class="org.apache.activemq.ActiveMQConnectionFactory">
                    <property name="brokerURL" value="${jms.broker.url}" />
                    <property name="prefetchPolicy" ref="prefetchPolicy" />
                    <property name="optimizeAcknowledge" value="true" />
                    <property name="useAsyncSend" value="true" />
                </bean>
            </property>
            <property name="sessionCacheSize" value="10"/>
            <property name="cacheProducers" value="false"/>
        </bean>

    Read the article

  • What's wrong with performing unit tests against concrete implementations if your frameworks are not going to change?

    - by palm snow
    First a bit of background: We are re-architecting our product suite that was written 10 years ago and served its purpose. One thing that we cannot change is the database schema, as we have a 500+ client base using this system. Our db schema has over 150 tables. We have decided on using Entity Framework 4.1 as the DAL and are still evaluating various frameworks for storing our business logic. I am investigating bringing unit testing into the mix, but I am also confused as to how far I need to go with setting up a full-blown TDD environment. One aspect of setting up unit testing is implementing repositories, units of work, mocking frameworks, etc. This means there will be cost and investment in the code bloat associated with all these frameworks. I understand some of this could be auto-generated, but when it comes to things like behaviors, that will be mostly hand written. Just to be clear, I am not questioning the importance of unit testing your code. I am just not sure we need all its components (like repositories, mocking, etc.) when we are fairly certain of the storage mechanism/framework (SQL Server/Entity Framework). All that code bloat with generic repositories makes sense when you need a generic layer with the ability to change it whenever you like; however, that is very likely YAGNI in our case. What we need is more integration testing, where we can test our code with concrete repository objects and test data in the database. In this scenario, just running integration tests seems more beneficial in our case. Any thoughts on whether I am missing anything here?
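    As a concrete illustration of that last point, an integration-style test can exercise a real repository against real test data with no mocking layer. A sketch in Python, with an in-memory SQLite database standing in for SQL Server/Entity Framework; all names here are illustrative:

        # Integration-style test: exercise a concrete repository against a real
        # (in-memory) database instead of a mock.
        import sqlite3
        import unittest

        class CustomerRepository:
            def __init__(self, conn):
                self.conn = conn

            def add(self, name):
                cur = self.conn.execute("INSERT INTO customers(name) VALUES (?)", (name,))
                return cur.lastrowid

            def find(self, customer_id):
                row = self.conn.execute(
                    "SELECT name FROM customers WHERE id = ?", (customer_id,)).fetchone()
                return row[0] if row else None

        class CustomerRepositoryTest(unittest.TestCase):
            def setUp(self):
                self.conn = sqlite3.connect(":memory:")
                self.conn.execute("CREATE TABLE customers(id INTEGER PRIMARY KEY, name TEXT)")
                self.repo = CustomerRepository(self.conn)

            def test_roundtrip(self):
                cid = self.repo.add("Acme")
                self.assertEqual(self.repo.find(cid), "Acme")

        if __name__ == "__main__":
            unittest.main()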

    Read the article

< Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >