Search Results

Search found 9960 results on 399 pages for 'iwork pages'.


  • Accessing UI elements from delegate function in Windows Phone 7

    - by EpsilonVector
    I have the following scenario: a page with a bunch of UI elements (grids, textblocks, whatever), and a button that, when clicked, launches an asynchronous network transaction which, when finished, invokes a delegate function. I want to reference the page's UI elements from that delegate. Ideally I would like to do something like currentPage.getUIElementByName("uielement").insert(data), or even uielement.insert(data), or something similar. Is there a way to do this? No matter what I try, an exception is thrown saying that I don't have permission to access that element. Is there a more correct way to handle updating pages with data retrieved over the network?

    Read the article

  • Implementation of instance testing in Java, C++, C#

    - by Jake
    For curiosity's sake, as well as to understand what it entails in a program, I'm curious how instance testing (instanceof in Java, is in C#, dynamic_cast in C++) works. I've tried to google it (particularly for Java), but the only pages that come up are tutorials on how to use the operator. How do the implementations vary across those languages? How do they treat classes with identical signatures? Also, it's been drilled into my head that using instance testing is a mark of bad design. Why exactly is this, and when does it apply? instanceof should still be used in methods like .equals() and such, right? I was also thinking of this in the context of exception handling, again particularly in Java. When you have multiple catch statements, how does that work? Is that instance testing, or is it just resolved during compilation where each thrown exception would go?
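
    Purely as an illustration of where instance testing shows up in Java (this sketch is not part of the original question), the snippet below uses the instanceof operator directly, uses it inside an equals() override, and relies on catch-block matching, which the JVM resolves at run time by testing the thrown object's class against each handler in order:

        class Animal { }

        class Dog extends Animal {
            @Override
            public boolean equals(Object other) {
                // instanceof inside equals(): reject anything that is not a Dog
                if (!(other instanceof Dog)) {
                    return false;
                }
                return true; // a real class would compare fields here
            }
        }

        public class InstanceTestDemo {
            public static void main(String[] args) {
                Animal a = new Dog();
                System.out.println(a instanceof Dog);    // true - checked against the runtime type
                System.out.println(a instanceof Animal); // true - superclasses match as well

                try {
                    throw new java.io.FileNotFoundException("missing");
                } catch (java.io.FileNotFoundException e) {
                    // Matched: handlers are tested in order against the thrown
                    // object's runtime class, so the most specific one wins here.
                    System.out.println("FileNotFoundException handler");
                } catch (java.io.IOException e) {
                    System.out.println("IOException handler");
                }
            }
        }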

    Read the article

  • Are there any guidelines for laying out screen "real estate?"

    - by Corey
    I'm wondering if there is any information about creating a decent page layout so that your website will appeal to users of all resolutions. For example, the optimal width for pages. It seems like on my resolution, most websites have their content centered and covering about 80% of the page, which is easy on the eyes. Or maybe the height of the website's logo/header -- some sites I stumble upon have a huge logo with links or navigation under it, so that I need to scroll down to see the actual content, like articles or images (these sites don't keep me for very long). I understand that every user is different and may use browser extensions or page zoom, or may be running some ancient system that displays at 640x480. I'm not looking for a "best" solution, but rather some guidelines about designing to accommodate different resolutions. Basically, how can I make sure that I don't design a page where a paragraph displays as several easy-to-read lines on my resolution, but turns into a single line at 1920x1080 and becomes hard for the user to follow?

    Read the article

  • I need a webpage to host my javascript!

    - by Amir Reza
    Does anyone know a website that will host JavaScript on their pages? I have a research project that needs to collect some RTT (round-trip time) measurements from all over the world and compare them. I have written the JavaScript code for that, but I do not have a high-hit-rate website to put it on to collect data. I know it is a little bit of an odd question to ask, but do you know any website or any trick that can help me? Note that the script would not do any harm to anybody! :-) Thanks. Decad is right: I basically need some people to put my script on their high-hit-rate websites so I can collect data from a large number of clients. Of course, the script runs in the background with no harm to the page. It basically measures some RTT and submits it to a server. I already have some pages, but they barely get a hit from outside! Thanks.

    Read the article

  • Receiving requests where absolute URLs on a page are morphed to relative URLs

    - by Jacob
    In our web pages, we have a hyperlink with an href to an absolute URL: https://some.other.host.com/blah.aspx?var1=val1&var2=val2 For some reason, in our logs, we see a lot of requests to URLs of this format: http://our.site.com/https:/some.other.host.com/blah.aspx?var1=val1&var2=val2 We don't have any JavaScript that would request that URL; it only appears inside of a hyperlink. Is there some sort of known bot, browser plugin, bug, etc. that could be responsible for these requests being made?

    Read the article

  • Professional Development – Difference Between Bio, CV and Resume

    - by Pinal Dave
    Applying for work can be very stressful – you want to put your best foot forward, and it can be very hard to sell yourself to a potential employer while highlighting your best characteristics and answering questions. On top of that, some jobs require different application materials – a biography (or bio), a curriculum vitae (or CV), or a resume. These things seem so interchangeable, so what is the difference? Let's start with the one most of us have heard of – the resume. A resume is a summary of your job and education history. If you have ever applied for a job, you will have used a resume. The ability to write a good resume that highlights your best characteristics and emphasizes your qualifications for a specific job is a skill that will take you a long way in the world. Unfortunately, for such an essential skill, it is one that many people struggle with.

    Resume

    So let's discuss what makes a great resume. First, make sure that your name and contact information are at the top, in large print (slightly larger font than the rest of the text; size 14 or 16 if the rest is size 12, for example). You need to make sure that if you catch the recruiter's attention, they know how to get hold of you. As for qualifications, be quick and to the point. Make your job title and the company the headline, and include your skills, accomplishments, and qualifications as bullet points. Use good action verbs, like "finished," "arranged," "solved," and "completed." Include hard numbers – don't just say you "changed the filing system," say that you "revolutionized the storage of over 250 files in less than five days." Doesn't that sentence sound much more powerful?

    Curriculum Vitae (CV)

    Now let's talk about the curriculum vitae, or "CV." A CV is more like an expanded resume. The same rules still hold true: put your name front and center, keep your contact info up to date, and summarize your skills with bullet points. However, CVs are often required in more technical fields – like science, engineering, and computer science. This means that you need to really highlight your education and technical skills.

    Difference between Resume and CV

    Resumes are expected to be one or two pages long – CVs can be as many pages as necessary. If you are one of those people lucky enough to feel limited by the size constraint of resumes, a CV is for you! On a CV you can expand on your projects, highlight really exciting accomplishments, and include more educational experience – including GPA and test scores from the GRE or MCAT (as applicable). You can also include awards, associations, teaching and research experience, and certifications. A CV is a place to really expand on all your experience and how great you will be in this particular position.

    Biography (Bio)

    Chances are, you already know what a bio is, and you have even read a few of them. Think about the one or two paragraphs that every author includes on the back flap of a book. Think about the sentences under a blogger's photo on every "About Me" page. That is a bio. It is a way to quickly highlight your life experiences. It is essentially the way you would introduce yourself at a party. Where a bio is required for a job, though, chances are they won't want to know about where you were born and how many pets you have. A bio is a way to summarize your entire job history in a quick-to-read format – and sometimes during a job hunt, being able to get to the point and grab the recruiter's interest is the best way to get your foot in the door. Think of a bio as your entire resume put into words. Most bios have a standard format. In paragraph one, talk about your most recent position and your accomplishments there, specifically how they relate to the job you are applying for. If you have teaching or research experience, training experience, certifications, or management experience, talk about them in paragraph two. Paragraphs three and four are for highlighting publications, education, certifications, associations, etc. To wrap up your bio, provide your contact info and availability (dates and times).

    Where to Use What?

    For most positions, you will know exactly what kind of application to use, because the job announcement will state what materials are needed – resume, CV, bio, cover letter, skill set, etc. If there is any confusion, choose whatever the industry standard is (CV for technical fields, resume for everything else), or choose whichever of your documents is the strongest.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, PostADay, Professional Development, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Root Domain Redirects Incorrectly To Https instead of to WWW

    - by Ari
    TL;DR - Why do visits to my website's homepage work without "www", but not visits to specific pages on it? I recently moved my website (Zappable.com) to a new web host, Red Hat's OpenShift (a PaaS). It requires using CNAME records to set up custom domains, something my domain name registrar (1&1) does not support without a hosting plan. So instead I set up CloudFlare between my domain and web host, and set up a CNAME record there. I then pointed a 1&1 "www" sub-domain to CloudFlare, and pointed my 1&1 root domain to the "www" sub-domain. This works fine for visits to my homepage, but for some reason it does not work when visiting a specific page without "www". Instead of adding "www", it goes to HTTPS, which is strange.

    Read the article

  • google custom search gives different result number for same query

    - by santiagozky
    We are using Google Custom Search, and we have found that totalResults often alternates between two values, even for the same query. The two values can be slightly different, or one can be more than double the other. The parameters I am using look like this:

    https://www.googleapis.com/customsearch/v1?
    q=something
    cx=XXXXXXXXXX
    lr=lang_en
    siteSearch=www.mydomain.com
    start=1
    fields=context%2Citems%28fileFormat%2CformattedUrl%2Clink%2Cpagemap%2Csnippet%2Ctitle%29%2Cqueries%2CsearchInformation%28searchTime%2CtotalResults%29%2Cspelling%2FcorrectedQuery
    key=YYYYYYYYYYYYYYY
    filter=0

    This is a problem because I calculate the number of result pages from totalResults. How can I get the same result count for the same query?

    Read the article

  • Should I start learning WPF?

    - by questron
    Hi, I've been studying C# for about 2 months now. I have a few years of programming experience, but in VBA and VB6, and nothing too in-depth (I've written simple tools, but nothing too advanced). I am very interested in WPF, and I was wondering whether it would be too early for me to start learning how to use it. For reference, I am about 400 pages into 'Head First C#', and I've written a program that can quickly tag image files for moving into a predefined folder or for deletion (it allows the user to pull images off a camera memory card and sort them VERY quickly). That's the most advanced app I've written.

    Read the article

  • adding noindex on pagination

    - by Damodar Bashyal
    I have found some conflicting opinions about adding noindex to pagination pages. What do pro webmasters have to say about this? I am planning to add a noindex meta tag to all pagination pages in the hope of increasing the site's value, so I would like some pros' feedback on this. For example, at http://w3tut.org/blog the first few paragraphs of three posts are displayed, and the meta description is taken from the first post on that page, which will cause a duplicate-meta issue. Also, the three posts on a page could be unrelated to each other. Is it a good idea to add noindex to these pages, so the full article posts get more value?
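
    If you do go this route, the tag most often suggested for paginated archive pages pairs noindex with follow, so the individual posts linked from the page can still be crawled. A minimal sketch of what that looks like in the page <head> (illustrative only; the question does not specify the exact directive):

        <meta name="robots" content="noindex, follow">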

    Read the article

  • How to remove HTML code from search result page content

    - by Jack Torris
    I have a music website. There are 46 album pages, and each page has a different player and files. I entered one of the album URLs in a search engine and found that Google is displaying player code in the search result content. For example, enter this URL in Google and check the results. Each result displays a .mp3 file in the content section. I see this: This page contains a demo of and documentation for the new jPlayer Playlist add-on, ... mp3:"http://www.jplayer.org/audio/mp3/Miaow-01-Tempered-song.mp3", ... I don't want Google to show the player code and mp3 files in the search results. How can I hide the audio files and player code from search engines? What would be the best solution for this?

    Read the article

  • What tools to use for efficient link building?

    - by Evgeny
    As most SEO experts keep saying, it is not just the content that you have – a hefty amount of quality incoming links to your content also matters; these are the two ways to get to the top of the search results. The question is: where do I find the incoming links? One way I know is Google Blog Search. It can be used to find blogs with information related to your content, and some allow you to leave comments. The comments usually consist of your name, e-mail, and website. If you put your keyword instead of your name, the keyword turns into a link to your website. Unfortunately, most blogs put rel="nofollow" on such links, but some blogs don't. What other ways are there to find quality pages on which to put keyword links back to your website? A quality link usually means: it is located on a page with relevant content; it does not have rel="nofollow" on the <a> tag; it has a relevant keyword as the anchor text, as in <a href="website">keyword</a>; and the page with the link has high PageRank (3+) and TrustRank.

    Read the article

  • Humorous Word 2010 "feature"?

    - by Michael Stephenson
    I'm just sitting on the train to work and had a funny experience with Word 2010 that I thought I'd share. I'm writing a document and all of a sudden, as usually happens, the train gets a little bit bumpy. Word decides it doesn't like this (maybe it prefers to fly?). Anyway, to show its dissatisfaction with the journey, it starts adding new rows to my table in the document all by itself. Five pages of rows later I still can't work out how to stop it, so I have to kill Word. Thank you, autosave.

    Read the article

  • How to remove duplicate content, which is still indexed, but not linked to anymore?

    - by David
    A bug in the tool we use to create search-engine-friendly URLs changed our whole URL structure overnight, and we only noticed after Google had already indexed the pages. Now we have a massive duplicate content issue, causing a harsh drop in rankings. Webmaster Tools shows over 1,000 duplicate title tags, so I don't think Google understands what is going on. Right URL: abc.com/price/sharp-ah-l13-12000-btu.html Wrong URL: abc.com/item/sharp-l-series-ahl13-12000-btu.html (created by mistake) After that, we changed all URLs back to the "right URLs" and, a few days later, set up a 301 redirect for all "wrong URLs". Now a massive number of pages is still in the index twice. As we do not link internally to the "wrong URLs" anymore, I am not sure if Google will re-crawl them very soon. What can we do to solve this issue and tell Google that all the "wrong URLs" now redirect to the "right URLs"? Best, David
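
    For reference, a single wrong-to-right mapping of the kind described above can be expressed as a permanent redirect in an Apache .htaccess file. This is only a sketch using the example URLs from the question; the post does not say which web server is in use, and each wrong URL needs its own rule (or a generated rule set) because the slugs differ:

        # Hypothetical mapping for one product page; repeat or generate per URL.
        Redirect 301 /item/sharp-l-series-ahl13-12000-btu.html http://abc.com/price/sharp-ah-l13-12000-btu.html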

    Read the article

  • Methods of Geotargeting and optimising for location

    - by Switchfire
    This is somewhat an SEO question and somewhat a general web developer question. Our company website is pretty awful; I'm currently redesigning it. The problem is that it has a directory called "regions", which contains pages for around 200 different locations around the country, all stuffed to the brim with keywords and useless content. Some of these pages work, and the traffic is good enough to keep a few of them. Apart from creating a page for every location again, is there another way of targeting all these locations without having to create a new page for each, or is there a more dynamic way to do it? Any ideas or suggestions?

    Read the article

  • Best way to prevent Google from indexing a directory [duplicate]

    - by Gkhan14
    This question already has an answer here: "Stopping Google index some web pages I have" (5 answers). I've researched many methods for preventing Google and other search engines from crawling a specific directory. The two most popular ones I've seen are: adding it to the robots.txt file: Disallow: /directory/ or adding a meta tag: <meta name="robots" content="noindex, nofollow">. Which method would work best? I want this directory to remain "invisible" to search engines so it does not affect any of my site's ranking. In other words, I want this directory to be neutral/invisible and "just there." I don't want it to affect any ranking. Which method would be the best to achieve this?

    Read the article

  • Announcement: New Tutorial - Using ADF Faces and ADF Controller with OEPE

    - by Juan Camilo Ruiz
    We are happy to announce the publication of our newest tutorial, which explores some of the latest features added in our OEPE 12c release for ADF development. The tutorial walks you through the creation of an ADF application that uses the ADF Faces Rich Client components in combination with the ADF Controller, ADF Model, and JPA. By working through this tutorial you will use and understand various features added to OEPE 12c that are specific to ADF development, such as: the ADF task flow editor, the visual pageDefinition editor, ADF integration with AppXRay, navigation across artifacts such as pages, pageDefinitions, and managed beans, and the property inspector for ADF Faces components. Stay tuned for more exciting tutorials that explore these and many more OEPE features. And of course, your feedback is always welcome!

    Read the article

  • Methods to Validate User Supplied Data

    - by clifgray
    I am working on a website where users record data from certain locations and input an address to tag each location with a GPS coordinate. Pretty frequently those locations are tagged more than a mile away from the actual location, and I am trying to implement a few ways to validate the data. Right now I am thinking of: adding a flag on location pages so other users can report an "incorrect location", letting me go through them one by one and fix them; letting users with a decent amount of experience (reputation) edit the location's GPS coordinates; or having a moderator validate each location before it goes live to make sure it is a good location. Are these reasonable? I know the first will take a lot of my time, and I would love some suggestions.

    Read the article

  • Mod_rewrite and urls that don't end with .php

    - by Kevin Laity
    I'm trying to use mod_rewrite to hide the .php extensions of my pages. However, it refuses to do any rewriting unless the input URL ends with .php, which makes that impossible. I can confirm that rewriting works fine as long as the URL has .php at the end: RewriteRule a\.php b\.php works, while RewriteRule a\.html b\.html does not. How can I turn off this behavior and allow it to rewrite all URLs? I'm on a shared host, so whatever I do has to be done from a .htaccess file. Update: There seems to be some confusion about what I'm asking here. The question is not about how to write the rule; the question is about server configuration. The rule I'm using is fine, and I can test that locally. But the server I'm working with is somehow configured so that mod_rewrite doesn't attempt to rewrite anything that doesn't end with .php.
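
    For comparison, a commonly used .htaccess pattern for serving extensionless URLs from .php files is sketched below. This is an illustration of the general technique, not a diagnosis of this particular host's configuration; the Options line may be rejected on hosts that don't allow overriding Options, and MultiViews is disabled because content negotiation can interfere with rewrites like this:

        Options -MultiViews
        RewriteEngine On
        # If the requested path is not an existing file or directory...
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # ...but a .php file of the same name exists, serve that file instead.
        RewriteCond %{REQUEST_FILENAME}.php -f
        RewriteRule ^(.*)$ $1.php [L]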

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into my data warehouse. The data I was importing from the business database into the warehouse was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update, or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then run a stored proc in the warehouse containing the MERGE statement that took the rows from the working table and updated the real fact table.

    Use Warehouse

    CREATE TABLE Integration.MergePolicy (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5))

    CREATE TABLE fact.Policy (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date)

    CREATE PROC Integration.MergePolicy as
    begin
        begin tran
        Merge fact.Policy as tgt
        Using Integration.MergePolicy as Src
        On (tgt.PolicyId = Src.PolicyId)
        When not matched by Target then
            Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
            values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
        When matched and src.Operation = 'U' then
            Update set PolicyTypeKey = src.PolicyTypeKey, Premium = src.Premium, Deductible = src.Deductible, EffectiveDate = src.EffectiveDate
        When matched and src.Operation = 'D' then
            Delete
        ;
        delete from Integration.WorkPolicy
        commit
    end

    Notice that my worktable (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, about 1.5 million rows were inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement, 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans.
    (I was now beginning to suspect that my problem was because the work table was stored as a heap.) Then I turned on STATISTICS IO and ran the sproc again. The output was quite interesting:

    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    I've reproduced the above from memory and the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages for tables stored as a heap, but I haven't, and my original assumption that a table stored as a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table may have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index on the table because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.) I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike
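
    A minimal T-SQL sketch of the fix described above (the index name and key column are illustrative; the post does not spell them out): truncate the heap worktable so its pages are released quickly, then add a clustered index so later scans read only the pages that actually hold rows.

        TRUNCATE TABLE Integration.MergePolicy;
        -- Hypothetical index name/key; any clustered index on the work table addresses the heap issue.
        CREATE CLUSTERED INDEX CIX_MergePolicy_PolicyId ON Integration.MergePolicy (PolicyId);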

    Read the article

  • Firefox profile issue

    - by vooda gopal
    Is there any limit on the number of Firefox profiles that can be created? My issue is that I am currently doing Selenium WebDriver automation on Linux against a device. There are 50 devices of the same kind, and the framework will pick up a device depending on availability. I need to bypass unsigned SSL pages. I am using Firefox 14. I have implemented the following, but it is not consistent: every time a device is chosen, its .cer certificate is added to the cert file in the Firefox profile, but I am getting sec_error_bad_signature very frequently. So I started recreating the cert file [deleting it and recreating it by opening Firefox] for every run. Now this poses a problem if multiple devices are run at the same time. Hence I want to create a separate Firefox profile for each run.
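
    For what it's worth, with the Selenium 2.x Java bindings (the question does not say which bindings are in use) you can usually avoid managing cert files by hand: create a fresh, throwaway FirefoxProfile per run and tell it to accept untrusted certificates. A minimal sketch under that assumption; the URL is a placeholder, and this does not address any per-device certificate pinning the framework may require:

        import org.openqa.selenium.WebDriver;
        import org.openqa.selenium.firefox.FirefoxDriver;
        import org.openqa.selenium.firefox.FirefoxProfile;

        public class UntrustedSslExample {
            public static void main(String[] args) {
                // A brand-new anonymous profile is created for this run only.
                FirefoxProfile profile = new FirefoxProfile();
                // Accept self-signed/untrusted certificates instead of failing with an SSL error.
                profile.setAcceptUntrustedCertificates(true);

                WebDriver driver = new FirefoxDriver(profile);
                try {
                    driver.get("https://device-under-test.example/");  // placeholder URL
                    System.out.println(driver.getTitle());
                } finally {
                    driver.quit();  // the temporary profile is discarded on quit
                }
            }
        }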

    Read the article

  • What all items can I put on my resume without it looking tacky? [closed]

    - by Earlz
    I've been searching for work, and so far it's very hard for me to even get a call back. So, I'm looking at adding things to my resume. I know a resume doesn't need to be over 2 pages. I have the basics: objective/personal info; general skills (languages known, etc.); work experience. Some things I'm considering adding to it: my college education (though I don't have a degree); awards given for programming skills in high school (curriculum contests and AP test scores); open source projects. Would any of these 3 items look tacky? And I only have about 1.5 years of work experience, but I've been programming since I was 13. Is there anything else I can add to my resume that would give me a better chance of getting my foot in the door?

    Read the article

  • Software requirements specification, please help!

    - by Nicholas Chow
    For a school project, I had to create an SRS for a "fictional" application. However, they did not show us exactly what it entails, and were very vague with explanations. The SRS asked of us has to have at least 5 functional requirements, 5 non-functional requirements, and 1 constraint. Now I have tried my best to make one; however, I think there are still a lot of mistakes in it. Could you all please look at it and provide me with some feedback on which parts I can improve, or just tell me which parts are plain wrong and how to make them better? (The project has a maximum of 12 pages, so it is a bit long; I will post it below.)

    FR1 Registration of Organizer
    FR1 describes the registration of an Organizer on CrowdFundum.
    FR1.1 The system shall display a registration form on the website.
    FR1.2 The system shall require a Name, Username, Document number (passport/ID card), Address, Zip code, City, Email address, Telephone number, Bank account, and Captcha code on the registration form when a user registers.

    Read the article

  • Macbook Pro 8,2 Graphics switching - Ubuntu 12.04

    - by fgs
    I've been reading docs and various pages for a few hours now and can't seem to put all of the pieces together on this. Basically, I am trying to get 12.04 installed on my MBP 8,2 with graphics card switching working in some way or another. My basic understanding is that I need to do an EFI boot install of Ubuntu so that graphics card switching will work (due to the hardware design). From there I may be able to use one of the kernel modules for graphics switching: https://help.ubuntu.com/community/HybridGraphics That article isn't clear on whether I need to do an EFI install. I have also seen comments in posts here that say an EFI install works by default as long as you have rEFIt installed. Overall, I'm quite lost as to the simplest way to proceed to get an install up and running with graphics switching. I don't mind using open source graphics drivers as long as the basics work. Any help towards a solution is greatly appreciated.

    Read the article

  • How do bingbot (is that the right spider name?) and googlebot interpret 301 redirects?

    - by jbcurtin
    I've been looking for documentation on how the Microsoft and Google bots interpret 301 redirects. It seems that googlebot stores documents in a URL-based index system, but I haven't been able to figure out how Bing works. Should I assume that they are still working towards copying everyone else, and assume they use an algorithm close to Google's? Is it best to just forward a page to its new location via JavaScript? I think this might be a black-hat trick, but how would I tell the bots that it's not? Is a 301 redirect my best option, and do I just have to bite the bullet because said pages no longer exist? What other options do I have that I might not be aware of?

    Read the article
