Search Results

Search found 1063 results on 43 pages for 'kevin dong nai jia'.

  • Weird ViewControllers - iPhone SDK

    - by Kevin
    Hello everyone, I have a Tab Bar Application for this iPhone app I am making. I put a simple button on the 3rd view (3rd tab) and give it an IBAction that shows an alert view. When I press Build and Go, everything builds fine. I go to the 3rd tab and press my button, and it simply crashes... Why is this happening? Everything I put in this 3rd tab crashes. I created a fresh view controller and rewrote the class files to start over, but I keep getting the same errors. Everything works fine on my first tab, where I originally set up the first view controller. P.S. It also says Incomplete Implementation of Class 'ThirdViewController', and I don't know why that's there. If anyone can help me out here, I would greatly appreciate it. Kevin

    Read the article

  • Which Ruby gem should I use for updating a Twitter or Facebook status along with authlogic_rpx?

    - by Kevin
    Hi, my Rails webapp uses tardate's excellent authlogic_rpx gem so that users can register and sign in using their Twitter or Facebook account. Now I need to update a user's Twitter or Facebook status. Which gem should I use for Twitter, and which for Facebook? Or should I prefer Net::HTTP for both? Since the users authorised my app through authlogic_rpx, do I already have an authorised token to use the Twitter and Facebook APIs? If so, where can I find it? Thanks, Kevin

    Read the article

  • How can I put text from a multi-line text box into one string?

    - by Kevin van Zanten
    Dear reader, I'm faced with a bit of an issue. I have a multi-line text box and I want to put all of its text into one single string, without any new lines in it. This is what I have at the moment:

        string[] values = tbxValueList.Text.Split('\n');
        foreach (string value in values)
        {
            if (value != "" && value != " " && value != null && value != "|")
            {
                valueList += value;
            }
        }

    The problem is that no matter what I try, there is always a new line (at least I think?) in my string, so instead of getting "valuevaluevalue" I get "value value value". I've even tried string.Replace and Regex.Replace, but to no avail. Please advise. Yours sincerely, Kevin van Zanten
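
    One likely cause, assuming the text comes from a Windows text box where lines end with "\r\n": splitting on '\n' alone leaves a carriage return attached to each value, which is what keeps the pieces apart. A minimal sketch that splits on both characters and applies the same filtering as the loop above:

        using System;
        using System.Linq;

        class NewlineStripper
        {
            static void Main()
            {
                // stand-in for tbxValueList.Text; note the Windows-style "\r\n" line endings
                string text = "value\r\nvalue\r\nvalue\r\n";

                // split on both '\r' and '\n', drop empty entries, and filter as the
                // original loop does before concatenating into one string
                string joined = string.Concat(
                    text.Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries)
                        .Where(v => v.Trim() != "" && v != "|"));

                Console.WriteLine(joined);   // prints "valuevaluevalue"
            }
        }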

    Read the article

  • Get String Instead of Source - Xcode Cocoa

    - by Kevin
    Hello everyone, I have a program that scans the contents of a website and displays it in a text box. The problem is that it shows the HTML source. For example, if my HTML were:

        <html>
        <body>
        <p>Hello</p>
        </body>
        </html>

    it shows the code above instead of just "Hello". How can I get my Objective-C program to read just the "Hello" and not the HTML source? I assumed it was the encoding used when reading the website, but I might be wrong. I would greatly appreciate a reasonable answer. Best regards, Kevin

    Read the article

  • Cocoa - NSFileManager removeItemAtPath Not Working

    - by Kevin
    Hey there everyone, I am trying to delete a file, but somehow NSFileManager will not let me do so. I do use the file in one line of code, but once that action has run, I want the file deleted. I have logged the error: I get error code 4 and the message "text.txt" could not be removed. Is there a way to fix this error "cleanly" (without any hacks) so that Apple will accept this app onto the Mac App Store?

    EDIT: This is what I am using:

        [[NSFileManager defaultManager] removeItemAtPath:filePath error:NULL];

    Thanks, Kevin

    Read the article

  • Weird jQuery/CSS Menu Issue

    - by Kevin Z
    Hey all, I have a jQuery drop-down menu, with jQuery and CSS to style it. However, every time you hover over the menu options and move back and forth, it seems to leave leftover pieces of the menu behind. Any ideas where this is coming from and how to get rid of it? Here is the code in a jsfiddle: http://jsfiddle.net/2msuP/2/ and you can see the page and how it works here: http://f4design.com/clients/bigiochame/index.html I am noticing it in Safari; it may not be apparent in all browsers, but the main users of this site will be on Safari. Any help is appreciated! Thanks! Kevin

    Read the article

  • Best way to position a two part background in CSS?

    - by Kevin Z
    Hey all! I have a weird background and I'm trying to figure out the best way to style it. There are two parts to the background image: the top part, which has a unique horizontal and vertical design (it's about 1024x700), and a bottom section that is unique horizontally but can be repeated vertically (1024x1). Right now the top section is a background image for the header; the problem is that this makes styling the rest of the page content awkward because the image is so big. What would be the best way to code a two-piece background like that in HTML and CSS? Thanks! Kevin

    Read the article

  • How to find the most recent associations created between two objects with Rails?

    - by Kevin
    Hi, I have a user model, a movie model and this association in user.rb (I use has_many :through because there are other associations between these two models):

        has_many :has_seens
        has_many :movies_seen, :through => :has_seens, :source => :movie

    I'm trying to get an array of the ten most recently created has_seens associations. For now, the only solution I have found is to crawl through all the users, building an array with every user.has_seens found, then sorting the array by has_seen.created_at and keeping only the last 10 items… That seems like a heavy operation. Is there a better way? Kevin

    Read the article

  • Count(*) vs Count(1)

    - by Nai
    Hi, just wondering whether any of you use Count(1) over Count(*), and whether there is a noticeable performance difference in SQL Server 2005? Or is this just a legacy habit carried forward from days gone by?

    Read the article

  • SQL SERVER 2008 JOIN hints

    - by Nai
    Hi all, recently I was trying to optimise this query:

        UPDATE Analytics
        SET UserID = x.UserID
        FROM Analytics z
        INNER JOIN UserDetail x ON x.UserGUID = z.UserGUID

    The estimated execution plan showed 57% on the Table Update and 40% on a Hash Match (Aggregate). I did some snooping around and came across the topic of JOIN hints, so I added a LOOP hint to my inner join and WA-ZHAM! The new execution plan shows 38% on the Table Update and 58% on an Index Seek. So I was about to start applying LOOP hints to all my queries until prudence got the better of me. After some googling, I realised that JOIN hints are not very well covered in BOL. Therefore:

    1. Can someone please tell me why applying LOOP hints to all my queries is a bad idea? I read somewhere that a LOOP JOIN is the default JOIN method for the query optimiser, but I couldn't verify the validity of that statement.
    2. When are JOIN hints used? When the sh*t hits the fan and ghost busters ain't in town?
    3. What's the difference between the LOOP, HASH and MERGE hints? BOL states that MERGE seems to be the slowest, but what is the application of each hint?

    Thanks for your time and help, people! I'm running SQL Server 2008, BTW, and the statistics mentioned above are ESTIMATED execution plans.

    Read the article

  • Microsoft Business Intelligence. Is what I am trying to do possible?

    - by Nai
    Hi guys, I have been charged with the task of analysing the log table of my company's website. This table contains a user's click path through the website for a given session. My company is looking to understand and spot trends based on the click paths of our users, and in doing so, identify groups of users that take a certain click path based on age, geography and so on. As you can tell from the title, I am completely new to BI and its capabilities, so I was wondering: are our objectives attainable, and how should I go about doing this? I am currently reading books online as well as other e-books I have found. All signs seem to suggest this is possible via sequence clustering, although the exact implementation and tweaks involved are currently lost on me. Therefore, if anyone has first-hand experience with such an undertaking, it would be awesome if you could share it here. Cheers!

    Read the article

  • SQL2k8 T-SQL: Output into XML file

    - by Nai
    I have two tables:

        Table Name: Graph

        UID1  UID2
        -----------
        12    23
        12    32
        41    51
        32    41

        Table Name: Profiles

        NodeID  UID  Name
        -----------------
        1       12   Robs
        2       23   Jones
        3       32   Lim
        4       41   Teo
        5       51   Zacks

    I want to get an xml file like this:

        <graph directed="0">
          <node id="1">
            <att name="UID" value="12"/>
            <att name="Name" value="Robs"/>
          </node>
          <node id="2">
            <att name="UID" value="23"/>
            <att name="Name" value="Jones"/>
          </node>
          <node id="3">
            <att name="UID" value="32"/>
            <att name="Name" value="Lim"/>
          </node>
          <node id="4">
            <att name="UID" value="41"/>
            <att name="Name" value="Teo"/>
          </node>
          <node id="5">
            <att name="UID" value="51"/>
            <att name="Name" value="Zacks"/>
          </node>
          <edge source="12" target="23" />
          <edge source="12" target="32" />
          <edge source="41" target="51" />
          <edge source="32" target="41" />
        </graph>

    Thanks very much!

    Read the article

  • How would I use HTMLAgilityPack to extract the value I want

    - by Nai
    For the given HTML, I want the value of id:

        <div class="name" id="john-5745844">
        <div class="name" id="james-6940673">

    UPDATE: This is what I have at the moment:

        HtmlDocument htmlDoc = new HtmlDocument();
        htmlDoc.Load(new StringReader(pageResponse));
        HtmlNode root = htmlDoc.DocumentNode;
        List<string> anchorTags = new List<string>();

        foreach (HtmlNode div in root.SelectNodes("//div[@class='name' and @id]"))
        {
            HtmlAttribute att = div.Attributes["id"];
            Console.WriteLine(att.Value);
        }

    The error I am getting is at the foreach line, stating: Object reference not set to an instance of an object.
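
    A note on that error, with a minimal sketch: HtmlAgilityPack's SelectNodes returns null rather than an empty collection when nothing matches the XPath, so the foreach line throws a NullReferenceException whenever the loaded HTML contains no matching div elements. Guarding for null makes that case explicit (the inline HTML below is a hypothetical stand-in for pageResponse):

        using System;
        using HtmlAgilityPack;

        class IdExtractor
        {
            static void Main()
            {
                // hypothetical stand-in for pageResponse
                string pageResponse = "<div class=\"name\" id=\"john-5745844\"></div>"
                                    + "<div class=\"name\" id=\"james-6940673\"></div>";

                var htmlDoc = new HtmlDocument();
                htmlDoc.LoadHtml(pageResponse);

                var divs = htmlDoc.DocumentNode.SelectNodes("//div[@class='name' and @id]");
                if (divs == null)
                {
                    // nothing matched - check the XPath or confirm the HTML actually loaded
                    Console.WriteLine("No matching div elements found.");
                    return;
                }

                foreach (HtmlNode div in divs)
                {
                    Console.WriteLine(div.GetAttributeValue("id", ""));
                }
            }
        }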

    Read the article

  • SQL Server 2008: Getting duration between user sessions

    - by Nai
    I have this table:

        UserID  SessionID  SessionStart  SessionEnd
        -----------------------------------------------
        1       abc1       2010-1-1      2010-1-2
        5       def3       2010-1-5      2010-1-9
        1       llk0       2010-1-10     2010-1-11
        5       spo8       2010-1-13     2010-1-15
        1       pie7       2010-1-16     2010-1-29

    I would like to find, for each user, the number of days between the end of one session and the start of that user's next session. So I am looking to get something like:

        UserID  DaysBetweenSessions
        -----------------------------
        1       8
        1       5
        5       4

    Thanks!

    Read the article

  • How to use a loop to download HTML with paging?

    - by Nai
    I want to loop through this URL and download the HTML:

        https://www.googleapis.com/customsearch/v1?key=AIzaSyAAoPQprb6aAV-AfuVjoCdErKTiJHn-4uI&cx=017576662512468239146:omuauf_lfve&q=" + searchTermFormat + "&num=10" + "&start=" + i

    start and num control the paging of the URL. So if &start=2 and &num=10, it will scrape 10 results from page 2. Given that Google has a maximum of num=10, how can I write a loop that downloads and scrapes the results for the first 10 pages? This is what I have so far, which just scrapes the first page:

        //input search term
        Console.WriteLine("What is your search query?:");
        string searchTerm = Console.ReadLine();

        //concatenate the strings using + symbol to make it URL friendly for google
        string searchTermFormat = searchTerm.Replace(" ", "+");

        //create a new instance of WebClient and use the DownloadString method to download the html
        WebClient client = new WebClient();
        int i = 1;
        string Json = client.DownloadString("https://www.googleapis.com/customsearch/v1?key=AIzaSyAAoPQprb6aAV-AfuVjoCdErKTiJHn-4uI&cx=017576662512468239146:omuauf_lfve&q=" + searchTermFormat + "&num=10" + "&start=" + i);

        //create a new instance of JavaScriptSerializer and deserialise the desired content
        JavaScriptSerializer js = new JavaScriptSerializer();
        GoogleSearchResults results = js.Deserialize<GoogleSearchResults>(Json);

        //output results to console
        Console.WriteLine(js.Serialize(results));
        Console.ReadLine();
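
    A sketch of one way to page through the first 10 pages, assuming the API's start parameter is the 1-based index of the first result and therefore advances in steps of num (1, 11, 21, ...); the key, cx and search term below are placeholders for the real values used in the snippet above:

        using System;
        using System.Net;

        class PagedSearch
        {
            static void Main()
            {
                // placeholders - substitute the real key, cx and formatted search term
                string baseUrl = "https://www.googleapis.com/customsearch/v1?key=YOUR_KEY&cx=YOUR_CX&q=";
                string searchTermFormat = "hello+world";

                WebClient client = new WebClient();
                const int pageSize = 10;

                for (int page = 0; page < 10; page++)
                {
                    int start = page * pageSize + 1;   // 1, 11, 21, ...
                    string json = client.DownloadString(
                        baseUrl + searchTermFormat + "&num=" + pageSize + "&start=" + start);

                    // each page's json can then be deserialized exactly as in the snippet above
                    Console.WriteLine(json);
                }
            }
        }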

    Read the article

  • Fastest way to compress a database or .bak file and transfer it

    - by Nai
    As per the question title, I wonder if there are special programmes or commands that make zipping up a .bak file and transferring it super quick. I read about xp_cmdshell here, but I'm not sure about the speed. My .bak file is about 12 GB at the moment. Related to this is the possibility of using Red Gate's SQL Data Compare to just transfer the differential data across the network pipeline, but I have never used SQL Data Compare before and I'm not sure how it goes about doing INSERTs on tables with primary keys and such. Also, I'm not sure about the speed. Does anyone have any experience with this programme or similar programmes? Cheers!
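
    As a rough baseline, one generic option is to compress the file in .NET before copying it across the network; this is only a sketch with hypothetical paths, and purpose-built options (native backup compression in SQL Server 2008 Enterprise, or third-party backup tools) will usually be faster on a 12 GB file:

        using System.IO;
        using System.IO.Compression;

        class BakCompressor
        {
            static void Main()
            {
                // hypothetical paths - substitute the real backup file and destination share
                string source = @"C:\backups\mydb.bak";
                string target = @"\\fileserver\transfer\mydb.bak.gz";

                using (FileStream input = File.OpenRead(source))
                using (FileStream output = File.Create(target))
                using (GZipStream gzip = new GZipStream(output, CompressionMode.Compress))
                {
                    input.CopyTo(gzip);   // stream the backup through the compressor to the share
                }
            }
        }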

    Read the article

  • Understanding WebRequest

    - by Nai
    I found this snippet of code here that allows you to log into a website and get the response from the logged-in page. However, I'm having trouble understanding all the parts of the code. I've tried my best to fill in whatever I understand so far; hope you guys can fill in the blanks for me. Thanks.

        string nick = "mrbean";
        string password = "12345";

        //this is the query data that is getting posted by the website.
        //the query parameters 'nick' and 'password' must match the
        //name of the form you're trying to log into. you can find the input names
        //by using firebug and inspecting the text field
        string postData = "nick=" + nick + "&password=" + password;

        // this puts the postData in a byte Array with a specific encoding
        //Why must the data be in a byte array?
        byte[] data = Encoding.ASCII.GetBytes(postData);

        // this basically creates the login page of the site you want to log into
        WebRequest request = WebRequest.Create("http://www.mrbeanandme.com/login/");

        // im guessing these parameters need to be set but i don't know why?
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        request.ContentLength = data.Length;

        // this opens a stream for writing the post variables.
        // im not sure what a stream class does. need to do some reading into this.
        Stream stream = request.GetRequestStream();

        // you write the postData to the website and then close the connection?
        stream.Write(data, 0, data.Length);
        stream.Close();

        // this receives the response after the log in
        WebResponse response = request.GetResponse();
        stream = response.GetResponseStream();

        // i guess you need a stream reader to read a stream?
        StreamReader sr = new StreamReader(stream);

        // this outputs the code to console and terminates the program
        Console.WriteLine(sr.ReadToEnd());
        Console.ReadLine();
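
    To help fill in those blanks, here is the same flow sketched with using blocks and comments that explain each step in general .NET/HTTP terms (a sketch of the idea, not a drop-in replacement for the snippet above):

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class LoginPost
        {
            static void Main()
            {
                // HTTP request bodies travel as raw bytes, which is why the form
                // string has to be encoded into a byte[] before it is sent.
                byte[] data = Encoding.ASCII.GetBytes("nick=mrbean&password=12345");

                WebRequest request = WebRequest.Create("http://www.mrbeanandme.com/login/");
                request.Method = "POST";                                   // send a body instead of just fetching the page
                request.ContentType = "application/x-www-form-urlencoded"; // tells the server how to parse that body
                request.ContentLength = data.Length;                       // declares how many bytes will follow

                // a Stream is simply a sequence of bytes you can write to;
                // the using block closes it once the body has been written
                using (Stream body = request.GetRequestStream())
                {
                    body.Write(data, 0, data.Length);
                }

                // the response stream holds the bytes the server sent back, and a
                // StreamReader turns those bytes back into text
                using (WebResponse response = request.GetResponse())
                using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd());   // the HTML of the logged-in page
                }
            }
        }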

    Read the article

  • SQL Server Multiple Running Totals

    - by Nai
    I have a table like this:

        UserID  Score  Date
        5       6      2010-1-1
        7       8      2010-1-2
        5       4      2010-1-3
        6       3      2010-1-4
        7       4      2010-1-5
        6       1      2010-1-6

    I would like to get a table like this:

        UserID  Score  RunningTotal  Date
        5       6      6             2010-1-1
        5       4      10            2010-1-3
        6       3      3             2010-1-4
        6       1      4             2010-1-6
        7       8      8             2010-1-2
        7       4      12            2010-1-5

    Thanks!

    Read the article

  • Count for consecutive records

    - by Nai
    I have a table as follows:

        RowID  SessionID              EventID  IPAddress      RequestedURL  Date
        1      m2jqyc45gtjmvb55dc4dg  1        82.23.149.238  Start         24/03/2010 19:52
        2      m2jqyc45gtjmvb55dc4dg  1        82.23.149.238  BuyNow        24/03/2010 19:52
        3      m2jqyc45gtjmvb55dc4dg  28       82.23.149.238  Clicked OK    24/03/2010 19:52
        4      m2jqyc45gtjmvb55dc4dg  1        82.23.149.238  ProductPage   24/03/2010 19:52
        5      m2jqyc45gtjmvb55dc4dg  1        82.23.149.238  Home          24/03/2010 19:56
        6      m2jqyc45gtjmvb55dc4dg  1        82.23.149.238  ProductPage   24/03/2010 19:56
        7      m2jqyc45gtjmvb55dc4dg  1        82.23.149.238  BuyNow        24/03/2010 19:56
        8      m2jqyc45gtjmvb55dc4dg  28       82.23.149.238  Clicked OK    24/03/2010 19:56
        9      m2jqyc45gtjmvb55dc4dg  1        82.23.149.238  Home          24/03/2010 19:56

    How do I write a query that counts how many times a BuyNow row is immediately followed by a Clicked OK row? For the dataset above, the returned count should be 2.

    Read the article

  • Keyword sorting algorithm

    - by Nai
    I have over 1000 surveys, many of which contain open-ended replies. I would like to parse all the words and get a ranking of the most-used words (disregarding common words) to spot a trend. How can I do this? Is there a program I can use?

    EDIT: If a third-party solution is not available, it would be great if we could keep the discussion to Microsoft technologies only. Cheers.
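
    As one Microsoft-stack starting point, a C#/LINQ word count along these lines would produce such a ranking; the stop-word list and the input string below are placeholders, and the survey text would really come from your database or files:

        using System;
        using System.Linq;
        using System.Text.RegularExpressions;

        class KeywordRanker
        {
            static void Main()
            {
                // stand-in for the combined open-ended replies
                string allReplies = "the product is great but the delivery of the product was slow";
                string[] stopWords = { "the", "is", "but", "of", "was", "a", "and" };

                var ranking = Regex.Matches(allReplies.ToLowerInvariant(), @"[a-z']+")
                                   .Cast<Match>()
                                   .Select(m => m.Value)
                                   .Where(w => !stopWords.Contains(w))   // disregard common words
                                   .GroupBy(w => w)
                                   .OrderByDescending(g => g.Count())
                                   .Take(20);

                foreach (var group in ranking)
                    Console.WriteLine("{0}\t{1}", group.Key, group.Count());
            }
        }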

    Read the article

  • Need help with Django tutorial

    - by Nai
    I'm doing the Django tutorial here: http://docs.djangoproject.com/en/1.2/intro/tutorial03/

    My TEMPLATE_DIRS in the settings.py looks like this:

        TEMPLATE_DIRS = (
            "/webapp2/templates/"
            "/webapp2/templates/polls"
            # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
            # Always use forward slashes, even on Windows.
            # Don't forget to use absolute paths, not relative paths.
        )

    My urls.py looks like this:

        from django.conf.urls.defaults import *
        from django.contrib import admin

        admin.autodiscover()

        urlpatterns = patterns('',
            (r'^polls/$', 'polls.views.index'),
            (r'^polls/(?P<poll_id>\d+)/$', 'polls.views.detail'),
            (r'^polls/(?P<poll_id>\d+)/results/$', 'polls.views.results'),
            (r'^polls/(?P<poll_id>\d+)/vote/$', 'polls.views.vote'),
            (r'^admin/', include(admin.site.urls)),
        )

    My views.py looks like this:

        from django.template import Context, loader
        from polls.models import Poll
        from django.http import HttpResponse

        def index(request):
            latest_poll_list = Poll.objects.all().order_by('-pub_date')[:5]
            t = loader.get_template('c:/webapp2/templates/polls/index.html')
            c = Context({
                'latest_poll_list': latest_poll_list,
            })
            return HttpResponse(t.render(c))

    I think I am getting the path of my template wrong, because when I simplify the views.py code to something like this, I am able to load the page:

        from django.http import HttpResponse

        def index(request):
            return HttpResponse("Hello, world. You're at the poll index.")

    My index template file is located at C:/webapp2/templates/polls/index.html. What am I doing wrong?

    Read the article

  • SQL Server 2008 2mb Limit for XML??

    - by Nai
    I am trying to output a long XML result from SSMS. When I right-click on the results and choose 'save results as...', I only get a 2 MB file. I have changed the settings in SSMS via Tools - Options - Query Results - SQL Server - Results to Grid, setting XML data to be unlimited. However, it still seems to be truncating my XML results. So, how can I bypass this problem and output my XML result to a file? Thanks
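
    One way around the grid limit, sketched here with placeholder connection details, query and output path: stream the XML out of the database directly to a file with SqlCommand.ExecuteXmlReader, so the results-grid truncation never comes into play:

        using System.Data.SqlClient;
        using System.Xml;

        class XmlExport
        {
            static void Main()
            {
                // placeholders - substitute the real connection string, FOR XML query and output path
                string connectionString = "Server=.;Database=MyDb;Integrated Security=true";
                string query = "SELECT * FROM MyTable FOR XML PATH('row'), ROOT('rows')";

                using (SqlConnection connection = new SqlConnection(connectionString))
                using (SqlCommand command = new SqlCommand(query, connection))
                {
                    connection.Open();
                    using (XmlReader reader = command.ExecuteXmlReader())
                    using (XmlWriter writer = XmlWriter.Create(@"C:\output\result.xml"))
                    {
                        writer.WriteNode(reader, true);   // writes the whole document, however large
                    }
                }
            }
        }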

    Read the article

  • Optimising a query for Top 5% of users

    - by Nai
    On my website there is a group of 'power users' who are fantastic and add lots of content to my site. However, their prolific activity has led to their profile pages slowing down a lot. For 95% of the other users, the SPROC that returns the data is very quick; it's only for this group of power users that the very same SPROC is slow. How does one go about optimising the query for this group of users? You can assume that the right indexes have already been constructed.

    EDIT: OK, I think I have been a bit too vague. To rephrase the question: how can I optimise my site to enhance the performance for these 5% of users? Given that this SPROC is the same one in use for every user and that it is already well optimised, I am guessing the next steps are to explore caching possibilities at the data and application layers?

    Read the article

  • Grow Your Business with Security

    - by Darin Pendergraft
    Author: Kevin Moulton

    Kevin Moulton has been in the security space for more than 25 years, and with Oracle for 7 years. He manages the East Enterprise Security Sales Consulting Team. He is also a Distinguished Toastmaster. Follow Kevin on Twitter at twitter.com/kevin_moulton, where he sometimes tweets about security, but might also tweet about running, beer, food, baseball, football, good books, or whatever else grabs his attention. Kevin will be a regular contributor to this blog, so stay tuned for more posts from him.

    It happened again! There I was, reading something interesting online, and realizing that a friend might find it interesting too. I clicked on the little email link, thinking that I could easily forward this to my friend, but no! Instead, a new screen popped up where I was asked to create an account. I was expected to create a User ID and password, not to mention providing some personally identifiable information, just for the privilege of helping that website spread their word. Of course, I didn't want to have to remember a new account and password, I didn't want to provide the requisite information, and I didn't want to waste my time. I gave up, closed the web page, and moved on to something else. I was left with a bad taste in my mouth, and my friend might never find her way to this interesting website. If you were this content provider, would this be the outcome you were looking for?

    A few days later, I had a similar experience, but this one went a little differently. I was surfing the web when I happened upon some little tchotchke that I just had to have. I added it to my cart. When I went to buy the item, I was again brought to a page to create an account. Groan! But wait! On this page, I also had the option to sign in with my OpenID account, my Facebook account, my Yahoo account, or my Google Account. I have all of those! No new account to create, no new password to remember, and no personally identifiable information to be given to someone else (I've already given it all to those other guys, after all). In this case, the vendor was easy to deal with, and I happily completed the transaction. That pleasant experience will bring me back again.

    This is where security can grow your business. It's a differentiator. You've got to have a presence on the web, and that presence has to take into account all the smart phones everyone's carrying, and the tablets that took over Cyber Monday this year. If you are a company that a customer can deal with securely, and do so easily, then you are a company customers will come back to again and again.

    I recently had a need to open a new bank account. Every bank has a web presence now, but they are certainly not all the same. I wanted one that I could deal with easily using my laptop, but I also wanted two-factor authentication in case I had to log in from a shared machine, and I wanted an app for my iPad. I found a bank with all three, and that's who I am doing business with.

    Let's say, for example, that I'm in a regular Texas Hold-em game on Friday nights, so I move a couple of hundred bucks from checking to savings on Friday afternoons. I move a similar amount each week and I do it from the same machine. The bank trusts me, and they trust my machine. Most importantly, they trust my behavior. This is adaptive authentication. There should be no reason for my bank to make this transaction difficult for me. Now let's say that I log in from a Starbucks in Uzbekistan and transfer $2,500. What should my bank do now? Should they stop the transaction? Should they call my home number? (My former bank did exactly this once when I was taking money out of an ATM on a business trip, even though I had provided my cell phone number as my primary contact. When I asked them why they called my home number rather than my cell, they told me that their "policy" is to call the home number. If I'm on the road, what exactly is the use of trying to reach me at home to verify my transaction?) But, back to Uzbekistan… Should my bank assume that I am happily at home in New Jersey, and someone is trying to hack into my account? Perhaps they think they are protecting me, but I wouldn't be very happy if I happened to be traveling on business in Central Asia. What if my bank were to automatically analyze my behavior and calculate a risk score? Clearly, this scenario would be outside my typical behavior, so my risk score would necessitate something more than a simple login and password. Perhaps, in this case, a one-time password sent to my cell phone would prove that this is not just some hacker halfway around the world.

    But what if you're not a bank? Do you need this level of security? If you want to be a business that is easy to deal with while also protecting your customers, then of course you do. You want your customers to trust you, but you also want them to enjoy doing business with you. Make it easy for them to do business with you, and they'll come back, and perhaps even tweet about it, or Like you, and then their friends will follow.

    How can Oracle help? Oracle has the technology and expertise to help you grow your business with security. Oracle Adaptive Access Manager will help you prevent fraud while making it easier for your customers to do business with you, by providing the risk analysis I discussed above, step-up authentication, and much more. Oracle Mobile and Social Access Service will help you secure mobile access to applications by expanding on your existing back-end identity management infrastructure, and by allowing your customers to transact business with you using the social media accounts they already know. You also have device fingerprinting and metrics to help you grow your business securely. Security is not just a cost anymore. It's a way to set your business apart. With Oracle's help, you can be the business that everyone's tweeting about.

    Image courtesy of Flickr user shareski

    Read the article
