Search Results

Search found 71496 results on 2860 pages for 'http content length'.


  • Assigning values to shader parameters in the XNA content pipeline

    - by Nick
    I have tried creating a simple content processor that assigns the custom effect I created to models instead of the default BasicEffect:

        [ContentProcessor(DisplayName = "Shadow Mapping Model")]
        public class ShadowMappingModelProcessor : ModelProcessor
        {
            protected override MaterialContent ConvertMaterial(MaterialContent material, ContentProcessorContext context)
            {
                EffectMaterialContent shadowMappingMaterial = new EffectMaterialContent();
                shadowMappingMaterial.Effect = new ExternalReference<EffectContent>("Effects/MultipassShadowMapping.fx");
                return context.Convert<MaterialContent, MaterialContent>(shadowMappingMaterial, typeof(MaterialProcessor).Name);
            }
        }

    This works, but when I go to draw a model in the game, the effect has no material properties assigned. How would I go about assigning, say, my DiffuseColor or SpecularColor shader parameter to white, or (better) can I assign it to some value specified by the artist in the model? (I think this may have something to do with the OpaqueDataDictionary, but I am confused about how to use it; the content pipeline has always been a black box to me.)
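
    A minimal sketch of how the OpaqueDataDictionary is typically used here, as a drop-in variant of the ConvertMaterial above. Assumptions: the effect exposes a DiffuseColor parameter, the "DiffuseColor" key name matches what the importer puts into the source material's OpaqueData, and keys placed in an EffectMaterialContent's OpaqueData are applied to same-named effect parameters when the model is loaded.

        protected override MaterialContent ConvertMaterial(MaterialContent material, ContentProcessorContext context)
        {
            EffectMaterialContent shadowMappingMaterial = new EffectMaterialContent();
            shadowMappingMaterial.Effect = new ExternalReference<EffectContent>("Effects/MultipassShadowMapping.fx");

            // Copy the artist-authored value if the importer provided one,
            // otherwise fall back to white. (Key name is an assumption; it
            // depends on the importer used for the model.)
            object diffuse;
            if (material.OpaqueData.TryGetValue("DiffuseColor", out diffuse))
                shadowMappingMaterial.OpaqueData.Add("DiffuseColor", diffuse);
            else
                shadowMappingMaterial.OpaqueData.Add("DiffuseColor", new Vector4(1f, 1f, 1f, 1f));

            return context.Convert<MaterialContent, MaterialContent>(shadowMappingMaterial, typeof(MaterialProcessor).Name);
        }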

    Read the article

  • Will Google penalize subdomains if content is nearly identical

    - by John Pham
    I have created a subdomain for a town in San Diego that's ranking very well for its keywords: http://carmelvalleymortgage.loanrebateinc.com/ I want to replicate this subdomain's content for another town in San Diego: http://sandiego.mortgage.loanrebateinc.com/ I will edit the text, tags, and image files specific to each town; otherwise the verbiage will be identical. Questions: 1. Will Google penalize the main site? 2. Will Google penalize the subdomains and list the content as spam? If yes to either 1 or 2, what strategies can I implement to prevent this? I'm using WordPress.

    Read the article

  • JavaOne 2011: Content review process and Tips for submissions

    - by arungupta
    The Technical Sessions, Birds of a Feather, Panels, and Hands-on Labs (basically all the content delivered at JavaOne) form the backbone of the conference. At this year's JavaOne you'll have access to the rock star speakers, the ability to engage with luminaries in the hallways, and a beer (or two) with community peers in designated areas. Even though the conference is Oct 2-6, 2011, and will be bigger and better than last year's, the Call for Papers submission and review/selection evaluation started much earlier. In previous years I've participated in the review process, and this year I was honored to serve as co-lead for the "Enterprise Service Architecture and Cloud" track with Ludovic Champenois. We had a stellar review team with an equal mix of Oracle and external community reviewers. The review process is demanding, with the reviewers going through multiple voting iterations on each submission to ensure that the selected content is the best of the submitted lot. Our ultimate goal was to ensure that the content best represented the track and, most importantly, would draw interest and excitement from attendees. As always, the number and quality of submissions were superb, making for a truly challenging (and rewarding) experience for the reviewers. As co-lead I tried to apply a fair and balanced process in the evaluation of content in my track. Here are some key steps followed by all track leads:

    - Vote on sessions: each reviewer is required to vote on the sessions on a scale of 1-5 and also provide a justifying comment.
    - Create buckets: divide the submissions into different buckets to ensure a fair representation of different topics within a track. This ensures that if a particular bucket got higher votes, the track is not exclusively skewed towards it.
    - Top 7: the review committee provides a list of the top 7 talks that can be used in promotional material by the JavaOne team. Generally these talks are easy to identify and a consensus is reached on them fairly quickly.
    - First cut: each track is allocated a total number of sessions (including panels), BoFs, and Hands-on Labs that can be approved. The track leads then create the first cut of the approvals using the cast votes coupled with their prior experience in the subject matter. In our case, Ludo and I have been attending and speaking at JavaOne (and other popular Java-focused conferences) for double-digit years.
    - The Grind: the first cut is then refined, and refined again, using multiple selection criteria such as bucket, speaker quality, topic popularity, cumulative vote total, and individual vote scale. The sessions that don't make the cut are reviewed again as well, to see whether they should replace one of the selected sessions as a potential alternate.

    I would like to thank the entire Java community for all the submissions, and many thanks to the reviewers who spent countless hours reading each abstract, voting on them, and helping us refine the list. I think approximately 3-4 cumulative hours were spent on each submission to reach an evaluation, particularly for the borderline cases. We gave our recommendations to the JavaOne Program Committee Chairperson (Sharat Chander), and accept/decline notifications should show up in submitter inboxes in the next few weeks.
    Here are some points to keep in mind when submitting a session to JavaOne next time:

    - JavaOne is a technology-focused conference, so any product, marketing, or seemingly marketing-ish talks are put at the bottom of the list. Oracle OpenWorld and Oracle Develop are better options for submitting product-specific talks.
    - Make your title catchy. Remember that attendees are more likely to read the abstract if they like the title.
    - We try our best to recategorize a talk to a different track if needed, but please ensure that you are filing it in the right track so that all the right eyeballs are looking at it. Also, it does not hurt to mark an alternate track if your talk meets the criteria.
    - Make sure to coordinate within your team before the submission; multiple sessions from the same team or company do not ensure that the best speaker is picked. In such cases we rely upon your "Google presence" and/or the review committee's prior knowledge of the speaker.
    - The reviewers may not know you or your product at all, and you get 750 characters to pitch your idea. Make sure to use all of them, down to the last character.
    - Make sure to read your abstract multiple times to ensure that you are giving all the relevant information. Think through your presentation and see if you are leaving out any important aspects. Also check whether the abstract has any redundant information that is not required by the reviewers.
    - There are additional sections that allow you to share information about the speaker and the presentation summary. Use them to blow your own horn and provide any other relevant details. Please don't say "call me at xxx-xxx-xxxx to find out the details" :-)

    The review committee enjoyed reviewing the submissions and we certainly hope you'll have a great time attending them. Happy JavaOne!

    Read the article

  • Content theft - Where can I go from here?

    - by Toby
    I am the webmaster of a very successful blog in a fairly small niche. Recently our success has started to bite us, with people copying posts from the site without consent and trying to pass them off as their own work. Most sites stop as soon as you contact them, but there is one in particular, a Blogger site, which persists in passing off our content as its own. Every post we find we report to Google, and they have been fairly good at taking the posts offline within a day or two, but this isn't good enough or a long-term solution. Given the nature of what is being blogged about, after 24 hours a post is pretty much useless, so I need some way to just stop them from taking our content. Any ideas? I don't want to go down the route of using a third party for people to get our RSS feed, but I guess that is one option?

    Read the article

  • Duplicate content in Top Level Domain and country specific website

    - by Ando
    I have myproduct.com, which is my master product page. For the UK I also own myproduct.co.uk, which is a copy of myproduct.com with some localized content: landing page, promotions, prices, and specific tags. But there is also duplicate content: myproduct.com/FAQs/ is the same as myproduct.co.uk/FAQs/ I don't want to redirect from myproduct.co.uk/FAQs/ to myproduct.com/FAQs/ because I don't want people to leave the localized website. The myproduct.com/FAQs/ page is my "go-to" FAQ page and the most likely to be up to date, so I want this page to be indexed by search engines, whereas I don't care about myproduct.co.uk/FAQs/ being indexed (unless indexing this page would increase my page rank :) ). What should I do now to be SEO friendly and SEO optimal? Stop indexing of myproduct.co.uk/FAQs/ via robots.txt? Do some rel="alternate" hreflang="x" configuring on both /FAQs/ pages? Something else?
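
    For reference, a minimal sketch of the rel="alternate" hreflang="x" markup mentioned above, assuming the .com page targets English speakers generally and the .co.uk page targets the UK; the same two lines would go in the <head> of both FAQ pages:

        <link rel="alternate" hreflang="en" href="http://myproduct.com/FAQs/" />
        <link rel="alternate" hreflang="en-gb" href="http://myproduct.co.uk/FAQs/" />

    With this annotation both URLs can stay indexed, and the search engine is told which one to serve to UK visitors, rather than treating the pair as plain duplicates.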

    Read the article

  • Content Manager Assistant PSVita Linux Does NOT Recognize USB Port

    - by Nicky Bailuc
    I have an external copy of Windows 7 alongside Quantal, and I installed Content Manager Assistant on it. I was able to start the program by finding its executable file in the Windows program folder and running it in Wine; however, Wine didn't recognize my PSVita, which was connected through one of my USB ports. Is there any way to configure Wine to properly recognize the Vita? Content Manager Assistant is a Windows- and Mac-only program that allows you to transfer files between your PC and PSVita, kind of like iTunes for an iPod.

    Read the article

  • International TLDs vs. duplicate content

    - by Litso
    Hey all, I currently work at a pretty big website that has visitors from around the globe. My job is to help out with the SEO, and one thing we've been discussing lately is the use of international TLDs. The ones we use fall into three groups:

    - (partly) translated websites like .es and .de that serve most of the content in the country's language
    - non-translated (English) websites for non-English-speaking countries (due to a lack of translations), like .ro and .cz
    - English websites for English-speaking countries with localized TLDs (.co.nz, .co.uk)

    On one hand I really have the feeling this is causing a lot of duplicate content, especially for the last two categories of TLDs. On the other hand, country-specific TLDs seem to score a lot better in that country's Google. Would it be advisable to keep on using these domains, or should we canonicalize them all to the .com version?
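
    For reference, a minimal sketch of what "canonicalize to the .com version" would look like in markup (example.com stands in for the real domain): each duplicate page on the country TLDs declares the .com page as the preferred version in its <head>.

        <link rel="canonical" href="http://www.example.com/some-page/" />

    The trade-off described above still applies: a cross-domain canonical consolidates ranking signals on the .com page, but the country-specific URL then largely drops out of that country's index.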

    Read the article

  • HTTP requests, using sprites and file sizes

    - by crazy sarah
    Hi all, I'm in the process of finding out all about sprites and how they can speed up your pages. So I've used SpriteMe to create an overall sprite image which is 130 KB; this is made up of 14 images with a combined total size of about 65 KB. So is it better to have one HTTP request and a file size of 130 KB, or 14 requests for a total of 65 KB? Also, there is a detailed image which has been put into the sprite which caused its size to go up by about 60 KB odd; this used to be a separate JPG image which was only 30 KB. Would I be better off having it separate and suffering the additional request?
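
    For context, a minimal sketch of how a combined sprite sheet is used (the file name, class names, and pixel offsets are made up for illustration): the sheet is downloaded once, and each icon is cropped out of it with background-position, which is where the single-request saving comes from.

        /* one HTTP request for the whole sheet */
        .icon {
            background-image: url("sprites/all-icons.png");
            background-repeat: no-repeat;
            width: 16px;
            height: 16px;
        }

        /* each icon picks its own region out of the sheet */
        .icon-home   { background-position: 0 0; }
        .icon-search { background-position: -16px 0; }
        .icon-cart   { background-position: -32px 0; }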

    Read the article

  • Plug-in or framework recommendation for showing content preview fly-over in CMS

    - by Michael Huang
    The requirement is to have either a front-end plug-in or a back-end processor that generates previews of content items (such as images, videos, PDF files, HTML pages) in a popup when the user hovers the mouse over the content. I did some research on this; it seems that there are assorted jQuery plugins for each type of file, but what we are looking for is a framework that handles all types of file previews. Ideally, we want to generate preview images on the back end, considering the cost of retrieving content on the front end. I did find some open source or proprietary CMSes that provide this feature, but usually they are shipped as one suite and the API for file preview might not be open. Is there any Java or jQuery framework that handles file previews? Thanks a lot
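
    As a point of reference, the front-end half of this can be quite small once the back end produces the thumbnails; a sketch, assuming a hypothetical endpoint that returns a pre-generated preview image for a given content id (the endpoint, class name, and data attribute are all made up):

        $(function () {
            // single reusable popup element
            var popup = $('<img id="preview-popup" style="position:absolute; display:none; z-index:1000;" />')
                .appendTo('body');

            $('.content-item')
                .on('mouseenter', function (e) {
                    // hypothetical backend endpoint serving a small PNG thumbnail
                    popup.attr('src', '/preview?id=' + $(this).data('id'))
                         .css({ top: e.pageY + 10, left: e.pageX + 10 })
                         .show();
                })
                .on('mouseleave', function () {
                    popup.hide();
                });
        });

    The harder part, as noted above, is the back-end generation of the preview images for each file type; the sketch only covers the fly-over display.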

    Read the article

  • Changing the content of a website completely, and SEO

    - by Sercan
    I have a blog that has been running for about 10 months and gets 300 organic unique visitors daily, and now I will establish an eCommerce website on that domain. That means I will delete all the content related to the blog and publish new pages related to eCommerce using a different script. The content of the blog and the topic of the eCommerce site are also quite different. How should I make this change in terms of SEO? What should I expect in terms of search rankings and organic hits?
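
    One step that is commonly recommended in this situation, shown as a sketch for Apache (.htaccess, with made-up paths): 301-redirect the old blog URLs to the closest relevant new page, or failing that to the home page, so that existing backlinks and any residual rankings are not simply dropped when the old pages disappear.

        RewriteEngine On
        # a post that has a close equivalent in the new shop
        RewriteRule ^blog/widget-reviews/?$ /products/widgets/ [R=301,L]
        # everything else under /blog/ falls back to the home page
        RewriteRule ^blog/ / [R=301,L]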

    Read the article

  • How can I use a Windows 2003 server as an HTTP proxy?

    - by Will
    I'd like to set up an HTTP proxy on a Windows 2003 server so that I can access blocked websites such as YouTube from behind a corporate firewall (DAMN THE MAN!). I've never done this before, so I'm not even sure if the picture I have in my head is valid or possible. So I'm stuck behind a firewall that blocks sites that I need to access occasionally, but that are blocked because of abuse by slackers. I've got a Windows 2003 server hosted out on the internet (i.e., outside of this odious firewall). I know I can configure my browser to use a proxy for my HTTP traffic, so why not use my server? What I'd like to know is:

    - Is my concept valid? Can this be done, and will it work?
    - How do I configure my server to act as a proxy? What applications may I have to install?

    Free is fine, but don't leave out commercial software. TIA

    Read the article

  • Are HTTP requests cached? [closed]

    - by nischayn22
    Many HTTP requests are sent repeatedly by browsers on almost every page load, such as the request for the jQuery .js file. Since these files are used on so many sites, don't modern browsers keep a cache for them? I am thinking of a system where the browser has a cached copy of a very frequently used .js file. On a new request for the .js file, it sends the server a request for a hash of the .js file (provided the server can reply to that) and compares the returned hash with the cached copy's hash... the rest is intuitive.
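
    For comparison, HTTP already provides something very close to this through cache validators and conditional requests; a sketch of the usual exchange (header values are made up). First response, where the server hands out a lifetime and a validator:

        HTTP/1.1 200 OK
        Cache-Control: public, max-age=86400
        ETag: "3f80f-1b6-3e1cb03b"
        Content-Type: application/javascript

    Revalidation after max-age expires, where the browser sends the validator back instead of asking for a hash:

        GET /js/jquery.min.js HTTP/1.1
        Host: example.com
        If-None-Match: "3f80f-1b6-3e1cb03b"

    If the file is unchanged, the body is not re-downloaded:

        HTTP/1.1 304 Not Modified
        ETag: "3f80f-1b6-3e1cb03b"

    The remaining gap, which the question is really about, is that the cache is keyed per URL, so the same jQuery file hosted on two different sites is cached twice (one argument for loading it from a shared CDN URL).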

    Read the article

  • SEO value of duplicating content externally

    - by Don
    I run a website that includes a blog which was hand-coded by myself and is hosted on the same domain. My partner in this endeavour thinks it would be a good idea to open a Blogger/WordPress blog and duplicate the on-site blog on this off-site blog. AFAIK the main reason for doing this is the SEO benefit of the inbound links that this off-site blog will create. I think this is a bad idea, because:

    - Effectively what we're doing is creating a (very small scale) link farm
    - We're more likely to be punished than rewarded (in SEO terms) for duplicating our content across domains
    - This introduces a problem of synchronising our content across domains. For example, if a blog post is edited on the on-site blog, then ideally the off-site blog should be similarly updated.

    I know very little about SEO, so I would be interested to hear what more informed readers have to say.

    Read the article

  • Should HTTP Verbs Be Used Semantically?

    - by Xophmeister
    If I'm making a web application which integrates with a server-side backend, would it be considered best practice to use HTTP methods semantically? That is, for example, if I'm fetching data (e.g., to populate a menu, etc.), I would use GET, but to update data (e.g., save a record), I would use POST. (I realise there are other methods that may be even more appropriate, but we need to consider browser support.) I can see the benefits of this in the sense that it's effectively a RESTful API, but at a slightly increased development cost. In my previous projects, I've POST'd everything: Is it worth switching to a RESTful mindset simply for the sake of best practice?
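
    A minimal sketch of the distinction being discussed, using jQuery (the URLs, field names, and callbacks are made up): reads go through GET, which browsers and proxies may cache and which is safe to retry; state changes go through POST.

        // read-only fetch, e.g. to populate a menu
        $.get('/api/menu', function (items) {
            renderMenu(items);   // hypothetical rendering helper
        });

        // state-changing save, e.g. updating a record
        $.post('/api/records/42', { title: 'Updated title' }, function (result) {
            showSaveConfirmation(result);   // hypothetical UI helper
        });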

    Read the article

  • Configuring trace file size and number in WebCenter Content 11g

    - by Kyle Hatlestad
    Lately I've been doing a lot of debugging using the System Output tracing in WebCenter Content 11g.  This is built-in tracing in the content server which provides a great level of detail on what's happening under the hood.  You can access the settings as well as a view of the tracing by going to Administration -> System Audit Information.  From here, you can select the tracing sections to include.  Some of my personal favorites are searchquery,  systemdatabase, userstorage, and indexer.  Usually I'm trying to find out some information regarding a search, database query, or user information.  Besides debugging, it's also very helpful for performance tuning. [Read More] 

    Read the article

  • Parser, send an argument / receive XML (receive already done, send not)

    - by bruno
        public List<Afood> getFoodFromCat(String cat) {
            List<Afood> list = new ArrayList<Afood>();
            try {
                URL xpto = new URL("http://10.0.2.2/webservice/nutrituga/get_food_by_cat.php");
                HttpURLConnection conn = (HttpURLConnection) xpto.openConnection();
                conn.setDoInput(true);
                conn.connect();
                InputStream is = conn.getInputStream();
                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                try {
                    DocumentBuilder db = dbf.newDocumentBuilder();
                    Document doc = db.parse(is);
                    NodeList nl = doc.getElementsByTagName("item");
                    for (int i = 0; i < nl.getLength(); i++) {
                        Node n = nl.item(i);
                        Node childNode = n.getFirstChild();
                        while (childNode != null) {
                            if (childNode.getNodeType() == Node.ELEMENT_NODE
                                    && childNode.getNodeName().equalsIgnoreCase("NAME_FOOD")) {
                                Node valor = childNode.getFirstChild();
                                list.add(new Afood(valor.getNodeValue(), "",
                                        (int) Math.round(Math.random()), 1, 1, 1, 1, 1, 1));
                            }
                            childNode = childNode.getNextSibling();
                        }
                    }
                    return list;
                } catch (ParserConfigurationException e1) {
                    e1.printStackTrace();
                } catch (SAXException e1) {
                    e1.printStackTrace();
                } catch (IOException e1) {
                    e1.printStackTrace();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
            return list;
        }

    I have this function that receives XML and copies it into the list. That part is already implemented and works. What I want to know is how to send a category (which I receive as the argument of the function) and receive only the food from that category. The server is ready to receive the category and to send back the food for that category. What do I have to do to send the category and receive the correct XML?
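
    A minimal sketch of the sending side, assuming the PHP script reads the category from a GET parameter; the parameter name "cat" is an assumption and has to match whatever get_food_by_cat.php actually expects. Only the first lines inside the outer try block change:

        // Append the category as a URL-encoded query parameter before opening the connection.
        // URLEncoder is java.net.URLEncoder; the UnsupportedEncodingException it declares
        // is an IOException, so the existing catch block already covers it.
        String encodedCat = URLEncoder.encode(cat, "UTF-8");
        URL xpto = new URL("http://10.0.2.2/webservice/nutrituga/get_food_by_cat.php?cat=" + encodedCat);
        HttpURLConnection conn = (HttpURLConnection) xpto.openConnection();
        conn.setDoInput(true);
        conn.connect();
        // ...then parse the response exactly as before; the script is expected to return
        // only the <item> elements belonging to the requested category.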

    Read the article

  • When to use HTTP status code 404

    - by Sybiam
    I am working on a project, and after arguing with people at work for more than an hour I decided to see what people on Stack Exchange might say. We're writing an API for a system; there is a query that should return a tree of Organization or a tree of Goals. The tree of Organization is the organization in which the user is present; in other words, this tree should always exist. In the organization, a tree of goals should also always be present (that's where the argument started). In the case where the tree doesn't exist, my co-worker decided that it would be right to answer the request with status code 200, and then started asking me to fix my code because the application was falling apart when there is no tree. I'll try to spare the flames and fury. I suggested raising a 404 error when there is no tree. It would at least let me know that something is wrong. When using 200, I have to add a special check to my response in the success callback to handle errors. I'm expecting to receive an object, but I may actually receive an empty response because nothing is found. It sounds totally fair to mark the response as a 404. And then war started, and I got the message that I didn't understand the HTTP status code scheme. So I'm here asking: what's wrong with 404 in this case? I even got the argument "It found nothing, so it's right to return 200". I believe that it's wrong, since the tree should always be present. If we found nothing and we are expecting something, it should be a 404. Extra: I also believe the best answer to the problem is to create default objects when organizations are created; having no tree shouldn't be a valid case and should be seen as undefined behavior. There is no way an account can be used without both trees. For that reason, they should always be present.

    Read the article

  • Suggested HTTP REST status code for 'request limit reached'

    - by Andras Zoltan
    I'm putting together a spec for a REST service, part of which will incorporate the ability to throttle users service-wide and on groups of, or on individual, resources. Equally, time-outs for these would be configurable per resource/group/service. I'm just looking through the HTTP 1.1 spec and trying to decide how I will communicate to a client that a request will not be fulfilled because they've reached their limit. Initially I figured that client code 403 - Forbidden was the one, but this, from the spec, bothered me: "Authorization will not help and the request SHOULD NOT be repeated". It actually appears that 503 - Service Unavailable is a better one to use, since it allows for communicating a retry time through the Retry-After header. It's possible that in the future I might look to support "purchasing" more requests via eCommerce (in which case it would be nice if client code 402 - Payment Required had been finalized!), but I figure that this could equally be squeezed into a 503 response too. Which do you think I should use? Or is there another I've not considered?
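
    For illustration, the kind of response being weighed up here, sketched with made-up values; as an aside, RFC 6585 defines 429 Too Many Requests for exactly this rate-limiting case, and it too allows a Retry-After header.

        HTTP/1.1 503 Service Unavailable
        Retry-After: 120
        Content-Type: text/plain

        Request limit reached for this resource; try again in 120 seconds.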

    Read the article

  • JavaScript: count minimal length of characters in text, ignoring special codes inside

    - by ilnur777
    I want to ignore special codes when counting the length of the text in a textarea; that is, the characters of the special codes should not be counted. I use the special codes to represent smileys in the text, and I want to count only the length of the text itself, ignoring the codes. Here is the approximate code I wrote, but I can't get it to work:

        // smileys
        // =======
        function smileys(){
            var smile = new Array();
            smile[0] = "[:rolleyes:]";
            smile[1] = "[:D]";
            smile[2] = "[:blink:]";
            smile[3] = "[:unsure:]";
            smile[4] = "[8)]";
            smile[5] = "[:-x]";
            return(smile);
        }

        // symbols length limitation
        // =========================
        function minSymbols(field){
            var get_smile = smileys();
            var text = field.value;
            for(var i=0; i<get_smile.length; i++){
                for(var j=0; j<(text.length); j++){
                    if(get_smile[i]==text[j]){
                        text = field.value.replace(get_smile[i],"");
                    }
                }
            }
            if(text.length < 50){
                document.getElementById("saveB").disabled=true;
            } else {
                document.getElementById("saveB").disabled=false;
            }
        }

    How should the script be written so that it works? Thank you!
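
    A sketch of one way to make the count ignore the codes, reusing the smiley list and element id from the question: strip every occurrence of each code from a working copy of the text before measuring its length (the original comparison get_smile[i]==text[j] compares a whole code against a single character, and replace() only removes the first occurrence).

        function minSymbols(field){
            var codes = smileys();
            var text = field.value;
            for (var i = 0; i < codes.length; i++) {
                // split/join removes every occurrence of the code, not just the first
                text = text.split(codes[i]).join("");
            }
            document.getElementById("saveB").disabled = (text.length < 50);
        }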

    Read the article

  • Counting string length in JavaScript and Ruby on Rails

    - by williamjones
    I've got a text area on a web site that should be limited in length. I'm allowing users to enter 255 characters, and am enforcing that limit with a Rails validation: validates_length_of :body, :maximum => 255 At the same time, I added a JavaScript character counter like you see on Twitter, to give feedback to the user on how many characters he has already used, and to disable the submit button when over length; I get that length in JavaScript with a call like this: element.length Lastly, to enforce data integrity, in my Postgres database I have created this field as a varchar(255), as a last line of defense. Unfortunately, these methods of counting characters do not appear to be directly compatible. JavaScript counts the best, in that it counts what users consider the number of characters, where everything is a single character. Once the submission hits Rails, however, all of the line breaks have been converted to \r\n, now taking up two characters' worth of space each, which makes a close call fail the Rails validation. Even if I were to hand-code a different length validation in Rails, it would still fail when it hits the database, I think, though I haven't confirmed this yet. What's the best way for me to make all this work the way the user would want? Best Solution: an approach that would enable me to meet user expectations, where each character of any type is only one character. If this means increasing the length of the varchar database field, a user should not be able to sneakily send a hand-crafted POST that creates a row with more than 255 letters. Somewhat Acceptable Solution: a JavaScript change that enables the user to see the real character count, such that hitting return increments the counter two characters at a time, while properly handling all symbols that might have these strange behaviors.
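
    One commonly suggested approach for the "best solution" case, shown as a sketch for a generic Rails model (the Post class name is made up): normalize line endings before validation, so the server-side count, the JavaScript counter, and the database all see one character per line break.

        class Post < ActiveRecord::Base
          validates_length_of :body, :maximum => 255

          before_validation :normalize_line_endings

          private

          # Collapse Windows-style "\r\n" (and any stray "\r") to a single "\n"
          # so the stored length matches what the client-side counter measured.
          def normalize_line_endings
            self.body = body.gsub(/\r\n?/, "\n") if body
          end
        end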

    Read the article

  • Browser privacy improvement implications for websites

    - by phq
    On https://panopticlick.eff.org/ the EFF lets you test the number of uniquely identifying bits that the browser gives a website. Among these are HTTP header fields such as User-Agent, Accept, Accept-Language, and later perhaps ETag and If-Modified-Since. There is also a lot of information that JavaScript can get from the browser, such as the time zone, screen resolution, and the complete list of fonts and plugins available. My first question is: is all this information really usable/used on a majority of websites? For example, how many sites really send different content types depending on the HTTP Accept header, or depending on which fonts are available (I thought CSS had taken care of this)? Let's say these headers and JavaScript capabilities were gone one day. Which ones would:

    - go unnoticed?
    - impact user experience?
    - impact server performance?
    - be immediately reimplemented because the Internet cannot work without them?

    Extra credit for differentiating between what can be done, what should be done, and what is done in most situations.

    Read the article

  • What is the HTTP_PROFILE browser header and how is it used?

    - by Tom
    I've just come across the HTTP_PROFILE header that seems to be used by mobile browsers to point to an .xml document describing the device's capabilities. Doing a Google search doesn't turn up any definitive resources on what this is and how it should be used, can anyone point me to something along the lines of a spec/W3C standard?

    Read the article

  • What is duplicate content and how can I avoid being penalized for it on my site?

    - by danlefree
    This is a general, community wiki question regarding duplicate content. If your question was closed as a duplicate of this question and you feel that the information provided here does not provide a sufficient answer, please open a discussion on Pro Webmasters Meta. What does Google consider to be duplicate content? Will the way I am presenting my content result in a duplicate content penalty? How can I avoid having my site's content treated as duplicate content?

    Read the article

  • Pub banter - content strategy at the ballot box?

    - by Roger Hart
    Last night I was challenged to explain (and defend) content strategy. Three sheets to the wind after a pub quiz, this is no simple task, but I hope I acquitted myself passably. I say "hope" because there was a really interesting question I couldn't answer to my own satisfaction. I wonder if any of you folks out there in the ethereal internet hive-mind can help me out? A friend, a rather concrete thinker who mathematically models complex biological systems for a living, pointed out that my examples were largely rooted in business-to-business web sales and support. He challenged me with: "Say you've got a political website, so your goal is to have somebody read it and vote for you. How do you measure the effectiveness of that content?" Well, you would... umm. Oh dear. I guess what we're talking about here, to yank it back to my present comfort zone, is a sales process where your point of conversion is off the site. The political example is perhaps a little below the belt, since what you can and can't do, and what data you can and can't collect, is so restricted. You can't throw up a "How did you hear about this election?" questionnaire in the polling booth. Exit polls don't pull in your browsing history and site session information. Not everyone fatuously tweets and geo-tags each moment of their lives. Oh, and folks lie. The business example might be easier to attack. You could have, say, a site for a farm shop that only did over-the-counter sales. Either way, it's tricky. I fell back on some of the work I've done usability-testing and benchmarking documentation, and suggested similar quick-and-dirty, small-sample qualitative UX trials. I'm not wholly sure that was right. Any thoughts? How might we measure and curate for this kind of discontinuous conversion?

    Read the article

  • XNA extending the existing Content type

    - by Maarten
    We are doing a game in XNA that reacts to music. We need to do some offline processing of the music data and therefore we need a custom type containing the Song and some additional data:

        // Project AudioGameLibrary
        namespace AudioGameLibrary
        {
            public class GameTrack
            {
                public Song Song;
                public string Extra;
            }
        }

    We've added a Content Pipeline extension:

        // Project GameTrackProcessor
        namespace GameTrackProcessor
        {
            [ContentSerializerRuntimeType("AudioGameLibrary.GameTrack, AudioGameLibrary")]
            public class GameTrackContent
            {
                public SongContent SongContent;
                public string Extra;
            }

            [ContentProcessor(DisplayName = "GameTrack Processor")]
            public class GameTrackProcessor : ContentProcessor<AudioContent, GameTrackContent>
            {
                public GameTrackProcessor() { }

                public override GameTrackContent Process(AudioContent input, ContentProcessorContext context)
                {
                    return new GameTrackContent()
                    {
                        SongContent = new SongProcessor().Process(input, context),
                        Extra = "Some extra data" // Here we can do our processing on 'input'
                    };
                }
            }
        }

    Both the Library and the Pipeline extension are added to the Game Solution and references are also added. When trying to use this extension to load "gametrack.mp3" we run into problems however:

        // Project AudioGame
        protected override void LoadContent()
        {
            AudioGameLibrary.GameTrack gameTrack = Content.Load<AudioGameLibrary.GameTrack>("gametrack");
            MediaPlayer.Play(gameTrack.Song);
        }

    The error message: Error loading "gametrack". File contains Microsoft.Xna.Framework.Media.Song but trying to load as AudioGameLibrary.GameTrack. AudioGame contains references to both AudioGameLibrary and GameTrackProcessor. Are we maybe missing other references?

    Read the article
