Search Results

Search found 22756 results on 911 pages for 'power query'.


  • How do I identify the referrer page in ASP.NET?

    - by dotnet-practitioner
    In VS2003, I am trying to find out the particular page where the request is coming from. I want to identify the exact .aspx page name, or somehow strip out just the page name. Currently I am using the following instruction:

        string referencepage = HttpContext.Current.Request.UrlReferrer.ToString();

    and I get the following result:

        "http://localhost/MyPage123.aspx?myval1=3333&myval2=4444"

    I want to get the result back without any query string parameters, so that I can identify the page MyPage123.aspx accurately. How do I do that?
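
    One way to get just the page name is to parse the referrer as a URI: the path component excludes the query string, and the last path segment is the file name. A minimal sketch of the idea in Java (illustrative only; the question's context is ASP.NET, where Request.UrlReferrer is already a Uri whose AbsolutePath drops the query string):

        import java.net.URI;

        public class ReferrerPage {
            public static void main(String[] args) {
                // getPath() drops the query string; the last path segment is the page name.
                String referrer = "http://localhost/MyPage123.aspx?myval1=3333&myval2=4444";
                URI uri = URI.create(referrer);
                String path = uri.getPath();                              // "/MyPage123.aspx"
                String page = path.substring(path.lastIndexOf('/') + 1);  // "MyPage123.aspx"
                System.out.println(page);
            }
        }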

    Read the article

  • Getting Started with CacheMoney

    - by Matt Grande
    I recently installed cache-money. After some difficulties getting memcached and cache-money set up, I thought I had it working. It cached the one query on my login page fine. Then I log in, go to my message index page, and get this error:

        indices delegated to @cache_config.indices, but @cache_config is nil: Slug(id: integer, name: string, sluggable_id: integer, sequence: integer, sluggable_type: string, scope: string, created_at: datetime)

    Searching for the first part of that error message returns 0 hits on Google, so I'm at a loss as to where to even begin. Any suggestions?

    Read the article

  • Unit Testing & Fake Repository implementation with cascading CRUD operations

    - by Erik Ashepa
    Hi, I'm having trouble writing integration tests which use a fake repository. For example, suppose I have a Classroom entity which aggregates students:

        var classroom = new Classroom();
        classroom.Students.Add(new Student("Adam"));
        _fakeRepository.Save(classroom);
        _fakeRepository.GetAll<Student>().Where((student) => student.Name == "Adam"); // This query will return null...

    When using my real repository implementation (NHibernate based), the above code works, because the save operation cascades to the student added on the previous line. Do you know of any fake repository implementation which supports this behaviour? Ideas on how to implement one myself? Or do you have any other suggestions which could help me avoid this issue? Thanks in advance, Erik.
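
    One way to get this behaviour in a test double is to have the fake repository recurse into each aggregate's children on save. A minimal in-memory sketch in Java (illustrative; the question's code is C#, and the HasChildren hook is an assumed convention, not part of any real framework):

        import java.util.ArrayList;
        import java.util.Collection;
        import java.util.List;

        // Aggregates opt in by exposing their children.
        interface HasChildren {
            Collection<Object> children();
        }

        class FakeRepository {
            private final List<Object> store = new ArrayList<>();

            // Saving an aggregate also registers its children, mimicking cascade-on-save.
            public void save(Object entity) {
                store.add(entity);
                if (entity instanceof HasChildren) {
                    for (Object child : ((HasChildren) entity).children()) {
                        save(child);
                    }
                }
            }

            @SuppressWarnings("unchecked")
            public <T> List<T> getAll(Class<T> type) {
                List<T> out = new ArrayList<>();
                for (Object o : store) {
                    if (type.isInstance(o)) out.add((T) o);
                }
                return out;
            }
        }

    With this, saving a Classroom whose children() returns its students makes getAll(Student.class) see them, matching the cascade the NHibernate-backed repository performs.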

    Read the article

  • In Django, can you add a method to querysets?

    - by Paul D. Waite
    In Django, if I have a model class, e.g.

        from django.db import models

        class Transaction(models.Model):
            ...

    then if I want to add methods to the model to store reasonably complex filters, I can add a custom model manager, e.g.

        class TransactionManager(models.Manager):
            def reasonably_complex_filter(self):
                return self.get_query_set().filter(...)

        class Transaction(models.Model):
            objects = TransactionManager()

    And then I can do:

        >>> Transaction.objects.reasonably_complex_filter()

    Is there any way I can add a custom method that can be chained to the end of a queryset from the model? I.e. add the custom method in such a way that I can do this:

        >>> Transaction.objects.filter(...).reasonably_complex_filter()

    Read the article

  • ASP.NET MVC2 Data Access Layer

    - by Paul
    For a small/medium-sized project, I'm trying to figure out the 'ideal' way to have a domain layer and a data access layer. My opinion on coupling tends more towards the view that the domain models should not be tightly coupled with the database layer; in other words, the data access layer shouldn't actually know anything about the domain objects. I've been looking at LINQ to SQL, and it wants to use its own models that it creates, so it ends up VERY tightly coupled. While I love the way you use LINQ to SQL in code, I really don't like the way it wants to make its own domain objects. What are some alternatives that I should consider? I tried NHibernate, but I did not like the way I had to query and get different objects back. I honestly love the syntax and the way you use LINQ; I just don't want it to be so tightly coupled to the domain objects.

    Read the article

  • Spotlight on Claims: Serving Customers Under Extreme Conditions

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, recently attended the CII Spotlight on Claims event in London. Bad weather and its implications for the insurance industry have become very topical as the frequency and diversity of natural disasters - including rain, wind and snow - have surged across Europe this winter. On England's wettest day on record, the county of Cumbria was flooded with 12 inches of rain within 24 hours. Freezing temperatures wreaked havoc on European travel, causing high-speed TGV trains to break down and stranding hundreds of passengers in a tunnel under the English Channel all night long without heat or electricity. A storm named Xynthia thrashed France and surrounding countries with hurricane force, flooding ports and killing 51 people. After the spring equinox, insurers may have thought the worst had passed. Then along came Eyjafjallajökull, spewing out vast quantities of volcanic ash in what is turning out to be one of the most costly natural disasters in history.

    Such extreme events challenge insurance companies' ability to serve their customers just when customers need their help most. When you add economic downturn and competitive pressures to the mix, insurers are stretched further and required to continually learn and innovate to meet high customer expectations with reduced budgets. These and other issues were hot topics of discussion at the recent "Spotlight on Claims" seminar in London, which focused on how weather is affecting claims and the insurance industry. The event was organized by the CII (Chartered Insurance Institute), a group with 90,000 members that has been at the forefront of setting professional standards for the insurance industry for over a century. Insurers came to the conference to hear how they could better serve their customers under extreme weather conditions, learn from the experience of their peers, and hear about technological breakthroughs in climate modeling, geographic intelligence and IT.

    Customer case studies at the conference highlighted the importance of effective and constant communication in handling the overflow of catastrophe-related claims. First and foremost is the need to rapidly establish initial communication with claimants to build their confidence in a positive outcome. Ongoing communication then needs to continue throughout the claims cycle to manage expectations and maintain ownership of the process from start to finish. Strong internal communication to support frontline staff was also deemed critical to successful crisis management, as was communication with the broader insurance ecosystem to tap into extended resources and business intelligence.

    Advances in technology - such as web-based systems to access policies and enter first notice of loss in the field, as well as customer-focused self-service portals and multichannel alerts - are instrumental in improving customer satisfaction and helping insurers deal with the claims surge, which can often reach four or more times normal workloads. Dynamic models of the global climate system can now be used to better understand weather-related risks, and as these models mature it is hoped that they will become more accurate in predicting the timing of catastrophic events. Geographic intelligence is also being used within a claims environment to better assess loss reserves and detect fraud.

    Despite these advances in dealing with catastrophes and predicting their occurrence, there will never be a substitute for qualified frontline staff to deal with customers. In light of pressures to streamline for efficiency, there was debate as to whether outsourcing was the solution or whether it was better to build on the people you have. In the final analysis, nearly everybody agreed that in the future insurance companies will have to work better and smarter to stay on top. An appeal was also made for greater collaboration among industry participants in dealing with the extreme conditions and systemic stress brought on by natural disasters. It was pointed out that the public often judges the industry as a whole, rather than the individual carriers, when it comes to freakish events, and that all - especially the end customer - would benefit at such times from the pooling of limited resources and professional skills rather than competing in silos for competitive advantage.

    One case study that stood out was on how The Motorists Insurance Group was able to power through one of the most devastating catastrophes in recent years - Hurricane Ike. The keys to Motorists' success were superior people, processes and technology. They did a lot of upfront planning and invested in their people, creating a healthy team environment that delivered "max service" even while staff were experiencing the same level of devastation as the rest of the population. Processes were rapidly adapted to meet the challenge of the catastrophe and continually adjusted to Ike's specific conditions as they evolved. Technology was fundamental to the execution of their strategy, enabling anywhere access, on-the-fly reassignment of resources, and rapid training to augment the workforce. You can learn more about the Motorists experience by watching this video.

    John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • Common way to compare timestamp in oracle, postgres and mssql

    - by Pratik
    Hi there! I am writing a SQL query which involves finding whether a timestamp falls within a particular range of days. I have written it for Postgres, but it doesn't work in Oracle and MSSQL. Is there a common way to compare timestamps across different databases? My Postgres SQL looks something like this:

        ... AND creation_date < (CURRENT_TIMESTAMP - interval '5 days')
            AND creation_date >= (CURRENT_TIMESTAMP - interval '15 days') ...

    Thanks! Pratik
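
    One portable approach is to keep interval arithmetic out of the SQL entirely: compute the boundary timestamps in application code and bind them as parameters, so the query contains no vendor-specific syntax. A sketch in Java/JDBC (the table name is hypothetical; the column name follows the question):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.Timestamp;
        import java.time.Instant;
        import java.time.temporal.ChronoUnit;

        public class RecentRows {
            public static ResultSet query(Connection con) throws Exception {
                Instant now = Instant.now();
                // Plain comparisons against bound parameters behave the same way
                // on Oracle, Postgres and MSSQL.
                PreparedStatement ps = con.prepareStatement(
                    "SELECT * FROM some_table"
                    + " WHERE creation_date < ? AND creation_date >= ?");
                ps.setTimestamp(1, Timestamp.from(now.minus(5, ChronoUnit.DAYS)));
                ps.setTimestamp(2, Timestamp.from(now.minus(15, ChronoUnit.DAYS)));
                return ps.executeQuery();
            }
        }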

    Read the article

  • SQL Server CONTAINS with digits gives no results

    - by dale
    Hi, I have a database table which is full-text indexed, and I use the CONTAINS function to perform search queries on it. When I do:

        SELECT * FROM Plants WHERE CONTAINS(Plants.Description, '"Plant*" AND "one*"');

    I get back all the correct results matching descriptions with the words "Plant" and "one". Some plants are named like "Plant 1", "Plant 2", etc., and this is the problem. When I do this, I get no results:

        SELECT * FROM Plants WHERE CONTAINS(Plants.Description, '"Plant*" AND "1*"');

    Anyone know why?

    Read the article

  • SQL Server Reporting Services - Fast TimeDataRetrieval - Long TimeProcessing

    - by user197529
    An application that I support has recently begun experiencing extended periods of time required to execute a report in SQL Server Reporting Services. The reports being executed are not terribly complex. There are multiple stored procedures (between 5 and 8) which return anywhere from a handful to 8000 records in total. Reports are generally from 2 to 100 pages. One can argue (and I have) about the benefit of a 100-page report, but the client is footing the bill. At any rate, the problem is that even a report with 500 records (11 pages) takes 5 minutes to return to the browser. In the execution log the TimeDataRetrieval is 60 seconds, but the TimeProcessing is 235 seconds. It seems bizarre to me that my query runs so quickly, but it takes Reporting Services so long to process the data. Any suggestions are greatly appreciated. Kind regards, Bernie

    Read the article

  • Ethernet Communication Error

    - by SivaKumar
    Hi, I wrote a program to query the status of an Ethernet printer. For that I created a TCP stream socket and sent the query command to the printer. In the error-free case it returns a "no error" status, but in the error case it hangs at the recv call. Even when I use non-blocking mode, recv returns nothing and errno is set to "Resource temporarily unavailable". Code:

        #include <stdio.h>
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <netdb.h>
        #include <string.h>
        #include <unistd.h>
        #include <arpa/inet.h>
        #include <errno.h>
        #include <stdlib.h>
        #include <fcntl.h>
        #include <sys/ioctl.h>
        #include <signal.h>
        #include <termios.h>
        #include <poll.h>
        #include <netinet/tcp.h>
        #include <stdarg.h>

        int main()
        {
            int ConnectSocket, select_err, err, nBytesRead;
            struct timeval waitTime = {10, 30};
            fd_set socket_set;
            char reset[] = "\033E 2\r";
            char print[] = "\033A 1\r";
            char buf[1024] = {0};

            /* Create a TCP stream socket. */
            ConnectSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
            printf("The socket ID is %d\n", ConnectSocket);
            if (ConnectSocket < 0) {
                perror("socket()");
                return 0;
            }

            struct sockaddr_in clientService;
            clientService.sin_family = AF_INET;
            clientService.sin_addr.s_addr = inet_addr("192.168.0.129"); /* printer IP */
            clientService.sin_port = htons(9100);                       /* printer port */

            if (connect(ConnectSocket, (struct sockaddr *) &clientService,
                        sizeof(clientService)) == -1) {
                perror("connect()");
                close(ConnectSocket);
                return -1;
            }

            /* Wait until the socket is writable, then send the print command. */
            FD_ZERO(&socket_set);
            FD_SET(ConnectSocket, &socket_set);
            do {
                errno = 0;
                select_err = select(ConnectSocket + 1, NULL, &socket_set, NULL, &waitTime);
            } while (errno == EINPROGRESS);

            if (select_err == -1 || select_err == 0) {
                int optVal = 0;
                socklen_t optLen = sizeof(optVal);
                if (select_err == -1) {
                    perror("select() write-side");
                } else { /* timeout */
                    errno = 0;
                    err = getsockopt(ConnectSocket, SOL_SOCKET, SO_ERROR,
                                     (char *) &optVal, &optLen);
                    printf("getsockopt returned %d, SO_ERROR = %d\n", err, optVal);
                    if (err == -1)
                        perror("getsockopt() write-side");
                }
                printf("select failed during write - ConnectSocket: %d\n", ConnectSocket);
                return -1;
            }

            err = send(ConnectSocket, print, sizeof(print) - 1, 0);
            printf("No of bytes sent is %d\n", err);
            if (err == -1 || err == 0) {
                perror("send()");
                return -1;
            }

            /* Wait for writability again, then send the reset command. */
            FD_ZERO(&socket_set);
            FD_SET(ConnectSocket, &socket_set);
            do {
                errno = 0;
                select_err = select(ConnectSocket + 1, NULL, &socket_set, NULL, &waitTime);
            } while (errno == EINPROGRESS);

            if (select_err == -1 || select_err == 0) {
                printf("select failed during write - ConnectSocket: %d\n", ConnectSocket);
                return -1;
            }

            err = send(ConnectSocket, reset, sizeof(reset) - 1, 0);
            printf("No of bytes sent is %d\n", err);
            if (err == -1 || err == 0) {
                perror("send()");
                return -1;
            }

            /* Wait for the printer's status response, then read it. */
            FD_ZERO(&socket_set);
            FD_SET(ConnectSocket, &socket_set);
            select_err = select(ConnectSocket + 1, &socket_set, NULL, NULL, &waitTime);
            printf("the return of the read-side select is %d\n", select_err);
            if (select_err == -1 || select_err == 0) {
                printf("Read timeout; ConnectSocket: %d\n", ConnectSocket);
                close(ConnectSocket);
                return -1;
            }

            nBytesRead = recv(ConnectSocket, buf, 1024, 0);
            printf("No of bytes read is %d\n", nBytesRead);
            printf("%s\n", buf);
            if (nBytesRead == -1) {
                perror("recv()");
                close(ConnectSocket);
                return -1;
            }

            close(ConnectSocket);
            return 1;
        }

    Read the article

  • how to group by over anonymous type with vb.net linq to object

    - by smoothdeveloper
    I'm trying to write a LINQ to Objects query in VB.NET. Here is the C# version of what I'm trying to achieve (I run this in LINQPad):

        void Main()
        {
            var items = GetArray(
                  new {a="a", b="a", c=1}
                , new {a="a", b="a", c=2}
                , new {a="a", b="b", c=1}
            );

            (
                from i in items
                group i by new {i.a, i.b} into g
                let p = new { k = g, v = g.Sum((i) => i.c) }
                where p.v > 1
                select p
            ).Dump();
        }

        // because VB.NET doesn't support anonymous-type array initializers, this eases the translation
        T[] GetArray<T>(params T[] values) { return values; }

    I'm having a hard time with the group by syntax, which is not the same (VB requires 'identifier = expression' in some places), as well as with the summing functor ('expression required'). Thanks so much for your help!

    Read the article

  • Flot Pie Chart using Ajax, Php and MySql

    - by Neriza Almirol
    Good day! Can you help me? I have a problem getting values from the database. I want to control the legend. I've been googling the best approach for the pie chart, but I am still looking for the best answer to my problem. It's easy to query the data from the database, but I want to show it using the Flot pie chart, and I need it for statistical reports. From the database, I need to get the percentage of female and male followers and separate them according to age group. The data (dateOfBirth) is available in our database through Facebook integration. Can you give me an example using Ajax, PHP and MySQL? Thank you! :)

        $.plot($("#graph_3"), graphData, {
            series: { pie: { show: true } },
            grid: { hoverable: true, clickable: true }
        });
        $("#graph_3").bind("plothover", pieHover);
        $("#graph_3").bind("plotclick", pieClick);

    Read the article

  • android.app.Application subclass, onTerminate is not being called

    - by synic
    From the documentation for android.app.Application: "Base class for those who need to maintain global application state" I am using my own subclass to maintain an object that I'm using to query a server. Also from the documentation: "onTerminate() Called when the application is stopping." However, onTerminate() in my class is never called. I press the back button while viewing my main activity, and everything seems to shut down. My main Activity's onDestroy() method is called and isFinishing() returns true, but my android.app.Application's onTerminate() method is never called. Why is this? What am I missing? Is there something that is keeping it open?

    Read the article

  • Preventing LDAP injection

    - by Matias
    I am working on my first desktop app that queries LDAP. I'm working in C under Unix and using OpenDS, and I'm new to LDAP. After working a while on that, I noticed that a user would be able to alter the LDAP query by injecting malicious code. I'd like to know which sanitizing techniques are known, not only for C/Unix development but in more general terms, i.e., web development, etc. I thought that escaping equals signs and semicolons would be enough, but I'm not sure. Here is a little piece of code to make the question clearer:

        String ldapSearchQuery = "(cn=" + $userName + ")";
        System.out.println(ldapSearchQuery);

    Obviously I do need to sanitize $userName, as stated in this OWASP article.
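
    For LDAP search filters specifically, RFC 4515 defines which characters must be escaped: backslash, '*', '(', ')' and NUL, each replaced by a backslash followed by its two-digit hex code. A minimal sketch in Java:

        public class LdapFilterEscape {
            // Escape a value before splicing it into a filter such as "(cn=" + value + ")".
            public static String escape(String value) {
                StringBuilder sb = new StringBuilder(value.length());
                for (char c : value.toCharArray()) {
                    switch (c) {
                        case '\\': sb.append("\\5c"); break;
                        case '*':  sb.append("\\2a"); break;
                        case '(':  sb.append("\\28"); break;
                        case ')':  sb.append("\\29"); break;
                        case '\0': sb.append("\\00"); break;
                        default:   sb.append(c);
                    }
                }
                return sb.toString();
            }

            public static void main(String[] args) {
                String userName = "*)(cn=*";  // hostile input
                // Prints (cn=\2a\29\28cn=\2a) - the injected filter syntax is neutralized.
                System.out.println("(cn=" + escape(userName) + ")");
            }
        }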

    Read the article

  • Recursive CTE Problem

    - by Chris
    Hi, I am trying to use a recursive CTE in SQL Server to build up a predicate formula from a table containing the underlying tree structure. For example, my table looks like:

        Id   Operator/Val   ParentId
        ----------------------------
        1    'OR'           NULL
        2    'AND'          1
        3    'AND'          1
        4    ''             2
        5    'a'            4
        6    'alpha'        4
        :    :              :

    which represents ((a alpha) AND (b beta)) OR ((c gamma) AND (a < delta)). ParentId is a reference to the Id, in the same table, of the parent node. I want to write a query which will build up this string from the table. Is it possible? Thanks
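
    Aggregating children into a parenthesized string is awkward inside a recursive CTE; one alternative is to load the rows and do the recursion in application code. A sketch in Java (the sample node values are illustrative, since some operators in the table above appear to have been lost in transcription):

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.StringJoiner;

        public class PredicateBuilder {
            static class Node {
                final int id;
                final String op;        // operator for inner nodes, operand text for leaves
                final Integer parentId;
                Node(int id, String op, Integer parentId) {
                    this.id = id; this.op = op; this.parentId = parentId;
                }
            }

            // Leaves print themselves; inner nodes join their rendered children
            // with their operator and wrap the result in parentheses.
            static String render(Node n, Map<Integer, List<Node>> children) {
                List<Node> kids = children.getOrDefault(n.id, Collections.emptyList());
                if (kids.isEmpty()) return n.op;
                StringJoiner sj = new StringJoiner(" " + n.op + " ", "(", ")");
                for (Node kid : kids) sj.add(render(kid, children));
                return sj.toString();
            }

            public static void main(String[] args) {
                List<Node> rows = List.of(
                    new Node(1, "OR", null),
                    new Node(2, "<", 1), new Node(3, "a", 2), new Node(4, "alpha", 2),
                    new Node(5, ">", 1), new Node(6, "b", 5), new Node(7, "beta", 5));
                Map<Integer, List<Node>> children = new HashMap<>();
                Node root = null;
                for (Node n : rows) {
                    if (n.parentId == null) root = n;
                    else children.computeIfAbsent(n.parentId, k -> new ArrayList<>()).add(n);
                }
                System.out.println(render(root, children));  // ((a < alpha) OR (b > beta))
            }
        }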

    Read the article

  • WPF DataGrid issue with db40

    - by Rich Blumer
    I am using the following code to populate a WPF DataGrid with items from my db4o OODB:

        IObjectContainer db = Db4oEmbedded.OpenFile(Db4oEmbedded.NewConfiguration(),
            "C:\Dev\ContractKeeper\Database\ContractKeeper.yap");
        var contractTypes = db.Query(typeof(ContractType));
        this.dataGrid1.ItemsSource = contractTypes.ToList();

    Here is the XAML:

        <Window x:Class="ContractKeeper.Window1"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:dg="http://schemas.microsoft.com/wpf/2008/toolkit"
            Title="Window1" Height="300" Width="300">
            <Grid>
                <dg:DataGrid AutoGenerateColumns="True" Margin="12,102,12,24" Name="dataGrid1" />
            </Grid>
        </Window>

    When the items get bound to the DataGrid, the gridlines appear as if there are records, but no data is displayed. Has anyone had this issue with db4o and the WPF DataGrid?

    Read the article

  • Auto DOP and Concurrency

    - by jean-pierre.dijcks
    After spending some time in the cloud, I figured it is time to come down to earth and start discussing some of the new Auto DOP features some more. As Database Machines (the v2 machine runs Oracle Database 11.2) are effectively selling like hotcakes, it makes sense to talk about the new parallel features in more detail. For basic understanding, make sure you have read the initial post. The focus there is on Auto DOP and queuing, which is to some extent the focus here too. But now I want to discuss concurrency a little and explain some of the relevant parameters and their impact, specifically in a situation with concurrency on the system.

    The goal of Auto DOP

    The idea behind calculating the Automatic Degree of Parallelism is to find the highest possible DOP (the ideal DOP) that still scales. In other words, if we were to increase the DOP above a certain point, we would see a tailing off of the performance curve and the resource cost/performance would become less optimal. Therefore the ideal DOP is the best resource/performance point for that statement.

    The goal of Queuing

    On a normal production system we should see statements running concurrently. On a Database Machine we typically see high concurrency rates, so we need to find a way to deal with both high DOPs and high concurrency. Queuing is intended to make sure we:

    - Don't throttle down a DOP because other statements are running on the system
    - Stay within the physical limits of a system's processing power

    Instead of making statements run at a lower DOP, we queue them to make sure they will get all the resources they want to run efficiently without thrashing the system. The theory - and hopefully practice - is that by giving each statement its optimal DOP, the sum of all statements runs faster with queuing than without.

    Increasing the Number of Potential Parallel Statements

    To determine how many statements we will consider running in parallel, a single parameter should be looked at: PARALLEL_MIN_TIME_THRESHOLD. The default value is 10 seconds. So far there is nothing new here, but do realize that anything serial (i.e., anything that stays under the threshold) goes straight into processing and is not considered in the rest of this post. Now, if you have a system with two groups of queries - short-running serial ones and potentially parallel long-running ones - you may want to worry only about the long-running ones with this threshold. As an example, let's assume the short-running stuff runs on average between 1 and 15 seconds in serial (and the business is quite happy with that), while the long-running stuff is in the realm of 1 to 5 minutes. It might be a good choice to set the threshold somewhere north of 30 seconds. That way the short-running queries all run serial as they do today (if it ain't broken, don't fix it), while the long-running ones are evaluated for (higher degrees of) parallelism. This makes sense because the longer-running ones are (at least in theory) more interesting to unleash a parallel processing model on, and the benefits of running them in parallel are much more significant (again, that is mostly the case).

    Setting a Maximum DOP for a Statement

    Now that you know how to control how many of your statements are considered to run in parallel, let's talk about the specific degree any given statement will be evaluated at. As the initial post describes, this is controlled by PARALLEL_DEGREE_LIMIT. This parameter controls the degree on the entire cluster, and by default it is CPU (meaning it equals the Default DOP). For the sake of an example, let's say our Default DOP is 32. Looking at our 5-minute queries from the previous paragraph, the limit of 32 means that none of the statements that are evaluated for Auto DOP ever run at a DOP of more than 32.

    Concurrently Running a High DOP

    A basic assumption about running high-DOP statements at high concurrency is that at some point in time you will run into a resource limitation (and this is true on any parallel processing platform!). And yes, you can then buy more hardware (e.g. expand the Database Machine in Oracle's case), but that is not the point of this post... The goal is to find a balance between the highest possible DOP for each statement and the number of statements running concurrently, with an emphasis on running each statement at its highest-efficiency DOP. The PARALLEL_SERVER_TARGET parameter is the all-important concurrency slider here. Setting this parameter to a higher number means more statements get to run at their maximum parallel degree before queuing kicks in. PARALLEL_SERVER_TARGET is set per instance (so it needs to be set to the same value on all 8 nodes in a full-rack Database Machine). Just as a side note, this parameter is set in processes, not in DOP; its default value equates to 4 * Default DOP (2 processes per DOP, and a default of 2 * Default DOP, hence 4 * Default DOP). Let's say we have PARALLEL_SERVER_TARGET set to 128. With our limit set to 32 (the default), we are able to run 4 statements concurrently at the highest DOP possible on this system before we start queuing. If these 4 statements are running, any next statement will be queued. To run a system at high concurrency, PARALLEL_SERVER_TARGET should be raised from its default to be much closer (start with 60% or so) to PARALLEL_MAX_SERVERS. By using both PARALLEL_SERVER_TARGET and PARALLEL_DEGREE_LIMIT you can easily control how many statements run concurrently at good DOPs without excessive queuing. Because each workload is a little different, it makes sense to plan ahead, look at these parameters, and set them based on your requirements.

    Read the article

  • Updating records with their subordinates via CTE or subquery

    - by Mike Jolley
    Let's say I have a table with the following columns:

        Employees Table
        employeeID        int
        employeeName      varchar(50)
        managerID         int
        totalOrganization int

    managerID is referential to employeeID. totalOrganization is currently 0 for all records. I'd like to update totalOrganization on each row to the total number of employees under them. So with the following records:

        employeeID  employeeName   managerID  totalOrganization
        1           John Cruz      NULL       0
        2           Mark Russell   1          0
        3           Alice Johnson  1          0
        4           Juan Valdez    3          0

    the query should update the totalOrganization values to:

        employeeID  employeeName   managerID  totalOrganization
        1           John Cruz      NULL       3
        2           Mark Russell   1          0
        3           Alice Johnson  1          1
        4           Juan Valdez    3          0

    I know I can get somewhat of an org chart using the following CTE:

        WITH OrgChart (employeeID, employeeName, managerID, level) AS
        (
            SELECT employeeID, employeeName, 0 as managerID, 0 AS Level
            FROM Employees
            WHERE managerID IS NULL
            UNION ALL
            SELECT Employees.employeeID, Employees.employeeName, Employees.managerID, Level + 1
            FROM Employees
            INNER JOIN OrgChart ON Employees.managerID = OrgChart.employeeID
        )
        SELECT employeeID, employeeName, managerID, level FROM OrgChart;

    Is there any way to update the Employees table using a stored procedure, rather than building some routine outside of SQL to parse through the data?
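
    A sketch of one possible T-SQL approach, executed here through JDBC: a recursive CTE that pairs every employee with each of their descendants (including themselves), aggregated per root and joined back in an UPDATE. Untested, and the connection string is hypothetical:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class UpdateOrgCounts {
            static final String UPDATE_SQL =
                "WITH Descendants (rootID, employeeID) AS ("
                + "  SELECT employeeID, employeeID FROM Employees"   // each employee paired with self
                + "  UNION ALL"
                + "  SELECT d.rootID, e.employeeID"
                + "  FROM Employees e"
                + "  INNER JOIN Descendants d ON e.managerID = d.employeeID)"
                + " UPDATE emp SET totalOrganization = agg.total - 1" // subtract self from the count
                + " FROM Employees emp"
                + " INNER JOIN (SELECT rootID, COUNT(*) AS total"
                + "             FROM Descendants GROUP BY rootID) agg"
                + "   ON agg.rootID = emp.employeeID";

            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                         "jdbc:sqlserver://localhost;databaseName=HR;integratedSecurity=true");
                     Statement st = con.createStatement()) {
                    System.out.println("Updated " + st.executeUpdate(UPDATE_SQL) + " rows");
                }
            }
        }

    Wrapping the same UPDATE in a stored procedure would keep everything inside SQL, as the question asks.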

    Read the article

  • UDF call in entity framework is cached

    - by Fred Yang
    I am doing a test after reading the article http://blogs.msdn.com/alexj/archive/2009/08/07/tip-30-how-to-use-a-custom-store-function.aspx about calling UDFs. When I use a function like

        objectContext.Entities.Where(t => udf(para1, para2) == 1)

    (here Entities is not an ObjectQuery but an ObjectSet), the first time I call the method it runs correctly. If I reuse the objectContext and run it again with different para1 and para2, the previous parameter values are still cached and the result is the same as before, which is wrong. The SQL profiler shows that both queries hit the database, but the T-SQL is the same. Am I missing something? Also, ObjectSet does not support .Where(esql_string). How do I get a UDF working with ObjectSet? Thanks, Fred

    Read the article

  • jquery dataTables plugin: dynamically modify ajaxSource

    - by Anthony Koval'
    Hello! On my page I have a dataTable which was initialized with an sAjaxSource url like "/api/reports". When we do sorting or filtering, it appends additional query keys to the url. I want to add the keys "date_from" and "date_to" to the sAjaxSource url (the date interval can be changed after table initialization). Is there any entry-point function, before the table reloads, where I can do something like:

        var oSettings = rtbl.fnSettings();
        oSettings.sAjaxSource = "/api/reports/?type=sites&date_from=" + $("#date_from").text()
            + "&date_to=" + $("#date_to").text();

    Thanks for your help!

    Read the article

  • Web Service to connect to an API and get the response back from the API

    - by Scarlette_June
    This is a general programming question. I'm new to Java web services programming using Apache Axis and JAX-RPC. We need to build 2 components, an app engine (shopping cart, payment gateway integration, etc.) and a UI control panel, over an existing API. The API understands only XML. How must we communicate with the API? We have been asked to write a web service to establish the communication. Please provide the steps and a code example/snippet on how to connect to an existing API through a web service and get the response back from the API to the calling web service. John, I hope I have been able to explain my query. If you have ideas on how to communicate with the API to get the desired result to the user, please let us know. We just started our careers in technology a year back, post our graduation, and this project is our very first Java EE project.
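
    At its simplest, talking to an XML-only API means POSTing an XML document over HTTP and reading the XML that comes back; the web service method then wraps this call and returns the parsed result. A minimal Java sketch (the endpoint URL and payload are made up for illustration):

        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;

        public class XmlApiClient {
            // POST an XML request to the API endpoint and return the raw XML response.
            public static String call(String endpoint, String requestXml) throws Exception {
                HttpURLConnection con = (HttpURLConnection) new URL(endpoint).openConnection();
                con.setRequestMethod("POST");
                con.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
                con.setDoOutput(true);
                try (OutputStream out = con.getOutputStream()) {
                    out.write(requestXml.getBytes(StandardCharsets.UTF_8));
                }
                try (InputStream in = con.getInputStream()) {
                    return new String(in.readAllBytes(), StandardCharsets.UTF_8);
                }
            }

            public static void main(String[] args) throws Exception {
                String response = call("http://api.example.com/cart",
                        "<cartRequest><item id=\"42\" qty=\"1\"/></cartRequest>");
                System.out.println(response);
            }
        }

    An Axis/JAX-RPC service method would call something like this internally, parse the returned XML (e.g. with a DOM or SAX parser), and hand a typed result back to the UI.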

    Read the article

  • Java - When to use Iterators?

    - by Walter White
    Hi all, I am trying to better understand when I should and should not use Iterators. To me, whenever I have a potentially large amount of data to iterate through, I write an Iterator for it. If the class also lends itself to the Iterator interface, then it seems like a win. I was reading a little bit that there is a lot of overhead with using an Iterator. A good example of where I used an Iterator was to iterate through a bunch of SQL scripts, executing one query at a time: reading each in, then executing it. Is there another performance trade-off I should be aware of? Before I used iterators, I would read the entire string of SQL commands to execute into an ArrayList and then iterate through that. If the import is rather large (as for geolocation data), the server tends to get bogged down. Walter
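
    The memory footprint is the main trade-off here: an Iterator lets you stream one statement at a time instead of materializing the whole script. A minimal sketch of such a streaming iterator (naive splitting on ';', ignoring semicolons inside string literals):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.UncheckedIOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.Iterator;
        import java.util.NoSuchElementException;

        // Streams semicolon-terminated SQL statements from a script file one at a
        // time, so the whole script never has to sit in memory at once.
        public class SqlScriptIterator implements Iterator<String> {
            private final BufferedReader reader;
            private String next;
            private boolean done;

            public SqlScriptIterator(Path script) throws IOException {
                this.reader = Files.newBufferedReader(script);
                advance();
            }

            private void advance() {
                if (done) { next = null; return; }
                try {
                    StringBuilder sb = new StringBuilder();
                    int c;
                    while ((c = reader.read()) != -1) {
                        if (c == ';') { next = sb.toString().trim(); return; }
                        sb.append((char) c);
                    }
                    done = true;          // EOF: emit any trailing statement, then stop
                    reader.close();
                    String tail = sb.toString().trim();
                    next = tail.isEmpty() ? null : tail;
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            }

            @Override public boolean hasNext() { return next != null; }

            @Override public String next() {
                if (next == null) throw new NoSuchElementException();
                String current = next;
                advance();
                return current;
            }
        }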

    Read the article

  • PHP mySQL count number of fields not empty

    - by Pez Cuckow
    I have a database of users where each user can send messages to other users (up to four), and the IDs of the messages they sent are stored in their user row, e.g.:

        Name   Email            Msg1  Msg2  Msg3  Msg4
        Pez    [email protected]    1     55    42    5     -- sent 4 messages
        Steve  [email protected]    0     0     0     0     -- sent 0 messages
        Leon   [email protected]    3     0     3     5     -- sent 3 messages

    How, in a MySQL query, can I get the number of those message columns that are not empty and not equal to 0, allowing me to order by that count? It would return:

        Pez - 4 messages
        Leon - 3 messages
        Steve - 0 messages

    In my mind, something like:

        ORDER BY count(!empty(msg1) + !empty(msg2) + !empty(msg3) + !empty(msg4))

    Many thanks,
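
    In MySQL a comparison already evaluates to 0 or 1, so the non-zero message slots can simply be summed and sorted on. A sketch via JDBC (the table name and connection details are assumptions):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class MessageCounts {
            public static void main(String[] args) throws Exception {
                String sql = "SELECT name,"
                           + " (msg1 <> 0) + (msg2 <> 0) + (msg3 <> 0) + (msg4 <> 0) AS sent"
                           + " FROM users"
                           + " ORDER BY sent DESC";
                try (Connection con = DriverManager.getConnection(
                         "jdbc:mysql://localhost/app", "user", "pass");
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(sql)) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name") + " - "
                                + rs.getInt("sent") + " messages");
                    }
                }
            }
        }

    If "empty" really means NULL rather than 0, the expression would need IS NOT NULL checks instead, since adding a NULL comparison result yields NULL.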

    Read the article

  • Best practices in ASP.Net code behind pages.

    - by patricks418
    Hi, I am an experienced developer, but I am new to web application development. Now I am in charge of developing a new web application, and I could really use some input from experienced web developers out there. I'd like to understand exactly what experienced web developers do in code-behind pages. At first I thought it was best to have a rule that all database access and business logic should be performed in classes external to the code-behind pages; my thought was that only logic necessary for the web form would be performed in the code-behind. I still think that all the business logic should be performed in other classes, but I'm beginning to think it would be all right if the code-behind had access to the database to query it directly, rather than having to call other classes to receive a dataset or collection back. Any input would be appreciated.
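
    The layering being described - the page delegating to a service, with data access hidden behind an interface - looks roughly like this. A sketch in Java for illustration (the question's context is ASP.NET, and all names here are made up):

        import java.util.List;

        // Data access stays behind an interface the page never sees directly.
        interface OrderRepository {
            List<String> findOrdersForCustomer(int customerId);
        }

        // Business logic lives in a service, not in the page.
        class OrderService {
            private final OrderRepository repo;
            OrderService(OrderRepository repo) { this.repo = repo; }

            List<String> openOrders(int customerId) {
                return repo.findOrdersForCustomer(customerId);
            }
        }

        // Stands in for the code-behind: it only asks the service and binds results.
        class OrderPage {
            private final OrderService service =
                    new OrderService(customerId -> List.of("order-" + customerId));

            void onLoad() {
                for (String order : service.openOrders(42)) {
                    System.out.println(order);
                }
            }
        }

    Whether the page may also run simple queries directly is then a team convention; keeping even those behind the service tends to keep the page easier to test.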

    Read the article

  • jQuery autocomplete pass null paramter to the controller in ASP.NET MVC 2

    - by myaesubi
    I'm using the jQuery autocomplete plugin from the jQuery website, calling a controller url which returns JSON. The problem is that the parameter sent to the controller is always null. Here is the in-browser jQuery code for the autocomplete:

        $(document).ready(function() {
            var url = "/Building/GetMatchedCities";
            $("#City").autocomplete(url);
        });

    and here is the ASP.NET MVC controller signature in C#:

        public JsonResult GetMatchedCities(string city) {
            ...
            return this.Json(query, JsonRequestBehavior.AllowGet);
        }

    Thanks in advance, Mohammad

    Read the article
