Search Results

Search found 25009 results on 1001 pages for 'content encoding'.

  • My first Windows Phone 7 App: Getting SharePoint Content

    - by Jan Tielens
    Earlier this week at the Mix10 conference, Microsoft announced the developer story for the Windows Phone 7 Series. As expected, it’s all about Silverlight! For all the details I highly recommend watching the recorded keynotes (day 1, day 2). Tonight I couldn’t resist trying to build my very first Windows Phone 7 application: the traditional Hello World thingy. Because the developer tools (Visual Studio 2010 and the free Visual Studio 2010 Express) have pretty nice templates, that wasn’t much of a challenge. So I tried to build something real: an application that can display SharePoint 2010 content, for example items from an announcements list. I had to work my way around some limitations because both SharePoint 2010 and the developer tools are still in beta and CTP, but finally I got it working! Because of the many workarounds, the code is not yet ready for publication, but I’ve created a small screencast so you can see the result. To be continued! :-) Windows Phone 7 POC: Getting SharePoint Data from Jan Tielens on Vimeo.

    Read the article

  • How do you author HDR content?

    - by Nathan Reed
    How do you make it easy for your artists to author content for an HDR renderer? What kinds of tools should you provide, and what workflows need to change, in going from LDR to HDR? Note that I'm not asking about the technical aspects of implementing an HDR renderer, but about best practices for creating materials and lighting in HDR. I've googled around a bit, but there doesn't seem to be much about this topic on the web. Can anyone point me to some good resources on this, or share their own experiences? Some specific points:
    • Lighting - how can lighting artists pick HDR light colors? Do they have a standard LDR color picker and then a multiplier? Is the multiplier in gamma or linear space? Maybe instead of a multiplier it's a log-luminance? Or a physical brightness level, like the number of lumens? How will they know what multiplier/luminance/brightness is "correct" for a given light?
    • Materials - how can texture artists make emissive color maps, such as neon signs, TV screens, skyboxes, etc? Can you paint one as a regular LDR (8-bit-per-channel) image and apply a multiplier (or log-luminance, etc.)? Are there cases where it's necessary to actually paint HDR images? If so, how do you go about this in Photoshop (or other software)?
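
    A minimal sketch of one of the options raised above (a standard LDR color picker plus a linear-space intensity multiplier); the gamma value and the function name are illustrative assumptions, not an established pipeline:

        using System;

        static class HdrLightColor
        {
            // Hedged sketch: convert an artist-picked 8-bit gamma-space color plus an
            // intensity multiplier into a linear-space HDR light color.
            public static float[] ToLinearHdr(byte r, byte g, byte b, float intensity)
            {
                byte[] ldr = { r, g, b };
                float[] hdr = new float[3];
                for (int i = 0; i < 3; i++)
                {
                    float c = ldr[i] / 255.0f;              // normalize to [0,1]
                    float linear = (float)Math.Pow(c, 2.2); // gamma -> linear (sRGB approximation)
                    hdr[i] = linear * intensity;            // apply the multiplier in linear space
                }
                return hdr;
            }
        }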

    Read the article

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've needed pseudo-ETL applications that can extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this: static void Main() { // Load parameters from app.config. // Get documents from queue. var files = someInterface.GetFiles(someFilterOrRegexPattern); foreach (var file in files) { // Extract metadata from the file. // Validate some attributes of the file; add any validation errors to an in-memory // structure (e.g. List<ValidationErrors>). if (isValid) { var fileData = File.ReadAllBytes(file); // Upload using some wrapper for an ORM or DAL someInterface.Upload(fileData, meta.Param1, meta.Param2, ...); } else { // Bounce the file } } // Report any validation errors (via message bus or SMTP or some such). } And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!
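
    For readability, here is the same skeleton lifted out of the paragraph above into a compilable sketch; the interfaces and the metadata/validation steps are placeholders standing in for whatever DAL or wrapper is actually in use, not the poster's real types:

        using System;
        using System.Collections.Generic;
        using System.IO;

        // Hypothetical stand-ins for the queue reader and the SFDC upload wrapper.
        interface IFileSource { IEnumerable<string> GetFiles(string pattern); }
        interface IUploader { void Upload(byte[] data, string title); }

        class Worker
        {
            private readonly IFileSource source;
            private readonly IUploader uploader;

            public Worker(IFileSource source, IUploader uploader)
            {
                this.source = source;
                this.uploader = uploader;
            }

            public void Run(string pattern)
            {
                var errors = new List<string>();
                foreach (var file in source.GetFiles(pattern))
                {
                    var title = Path.GetFileNameWithoutExtension(file); // stand-in metadata extraction
                    if (title.Length > 0)                               // stand-in validation rule
                        uploader.Upload(File.ReadAllBytes(file), title);
                    else
                        errors.Add("Bounced: " + file);                 // bounce the file
                }
                // Report any validation errors here (message bus, SMTP, etc.).
            }
        }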

    Read the article

  • Similar domains using my business' content, and stealing SEO results

    - by Murciano
    I've been hired to create a website for a restaurant in my city, let's call it "Flying Dragon" Chinese restaurant. The restaurant has never had a website, though the business itself is about ten years old. However, if you Google the restaurant's name, the first site that comes up seems to be affiliated with the restaurant itself, even though it is not. This site - let's say, flyingdragonchinese.com - is also the one that Google has apparently selected, in its results, to be the official website of the restaurant - in essence, the first Google result is flyingdragonchinese.com, and directly beneath it, within the same entry, are the Google reviews and contact information. Upon visiting flyingdragonchinese.com (again, not the actual name), I see that the website has taken the menu content from the restaurant, in the same manner that Yelp does, but it also seems (to the untrained eye) to be the restaurant's official site. Basically, someone has created a fake website for the business (I am not sure why) using its actual menu and contact information, and is hogging the search results. The concept is similar to a "scraping site" except that the information seems to have been stolen manually. The main problem is that visitors to this site will have an inaccurate impression of the restaurant. I feel like the obvious solution is to register a new domain for my site, and simply beat out this competitor (or whatever it is) with smarter SEO and business verification with Google. However, the Conan-the-Barbarian-web-designer part of me wants to somehow bash this other site (deservedly?) into oblivion. But I don't know what I can really do, besides maybe issuing a cease-and-desist letter, or trying to contact the web host for the site, although there is no contact information available on this "fake" site for the site owner. Has anyone ever experienced something like this? Is there any solution?

    Read the article

  • Canonicals with differing content

    - by Jimbo Jonny
    An interesting conundrum here with canonicals. Let's say I have a site with a "verified" system where other websites can become so-and-so "verified". The URL they send people to in order to confirm verification is something like "blah.com/verify/company1" or "blah.com/verify/company2". But logically "blah.com/verify" itself is not verifying anyone in particular, so it redirects to the signup form to get verified, at "blah.com/verify/register". As for the actual companies registered, I figure it doesn't make sense to index every individual URL with only the tiny difference of which company name it's saying yay or nay to being verified, so canonicals could come in handy on those pages to condense the indexing. Yet making "blah.com/verify" the canonical "hub" doesn't work well because it's a signup form, not a verification page, so it technically has quite different content from the various verification pages themselves. But at the same time it's a bit unfair to choose one company to point all the canonical benefits to and use that as the "hub", yet a bit wasteful to have Google index every individual verification page and spread out all that link juice. Basically, I'm just looking for advice: what's best for this from a search engine standpoint?
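
    For reference, the canonical hint itself is a single link element in the head of each verification page (URL hypothetical); note that search engines treat it as a hint and are documented to ignore it when the pages differ substantially, which is exactly the tension described above:

        <!-- On blah.com/verify/company1 -->
        <link rel="canonical" href="http://blah.com/verify/" />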

    Read the article

  • New Online Learning Library (OLL) content

    - by Irina
    Looking to brush up on OAM or OVD skills? Want some help with OIM? Well, have you checked our Online Learning Library (OLL) recently? OLL is a great way to pick up new skills in short blocks of time, and there is an enormous selection, on a diverse set of products. Every month these trainings get hundreds or thousands of hits. It would be worth your while to spend some time just poking around the nooks and crannies for items that interest you. A smattering of new OBEs and other content has recently become available, and if you haven't already, you might want to check them out:
    • Identity Management: Business Scenarios
    • Business and IT – Collaborative Access Review Sign Off and Closed Loop Identity Certification
    • Oracle Identity Governance: End to End integration From Oracle Identity Manager to a Target Webservice
    • Oracle Identity Manager: Configuring SOA Composite
    • Oracle Identity Manager: Web Services Connector - Overview
    • How to do a basic Oracle Virtual Directory (OVD) Setup?
    • How to setup a simple Oracle Virtual Directory (OVD) Join?
    • Installing Oracle Access Manager: Identity Server and WebPass
    Also new is an Oracle University 5-day class you might want to investigate: Oracle Access Manager R2: Administration Essentials. An OAM Advanced Administration class is in the works and should be available late summer or fall, so keep your calendar clear! Be sure to let us know in the Comments if there is a training you would find useful. Happy Trails :)

    Read the article

  • PHP File Serving Script: Unreliable Downloads?

    - by JGB146
    This post started as a question on ServerFault ( http://serverfault.com/questions/131156/user-receiving-partial-downloads ) but I determined that our php script was the culprit. So I'm issuing an updated question here about what I believe is the actual issue. I am using a php script to verify permissions and then serve up a file for users of my website to download. Most of the time, this works, but recently one user has been seeing problems with larger downloads. He is only getting ~80% of downloads for files that are 100MB in size. Also, all downloads from this script fail to report a filesize. Further, tests revealed that the same user COULD reliably download each of the failed files if given a direct link (at which point the filesize is reported). Here's the relevant snippet of code that we are using to serve the file: header("Content-type:$contenttype"); $len = filesize($filename); header("Content-Length: $len"); header("Content-Disposition: attachment; filename=".$title.".".$ext); readfile($filename); Note that $contenttype, $filename, $title, and $ext are all set correctly before we get here. These have been triple-checked. None of them are the problem. Also, $len does provide the correct filesize. While researching this issue, I came across this post: http://stackoverflow.com/questions/1334471/content-length-header-always-zero It seems that I am encountering the same issue. When I use the script, I get chunked encoding on the file and no size is set for content-length. I'm hypothesizing that something is going wrong on the large downloads, leading him to get a zero-length chunk before the end of the file. Here's what the headers look like for a direct request: http://www.grinderschool.com/videos/zfff5061b65ae00e8b21/KillsAids021.wmv GET /videos/zfff5061b65ae00e8b21/KillsAids021.wmv HTTP/1.1 Host: www.grinderschool.com User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: http://www.grinderschool.com/phpBB3/viewtopic.php?f=14&p=29468 Cookie: style_cookie=printonly; phpbb3_7c544_u=2; phpbb3_7c544_k=44b832912e5f887d; phpbb3_7c544_sid=e8852df42e08cc1b2250300c2897f78f; __utma=174624884.2719561324781918700.1251850714.1270986325.1270989003.575; __utmz=174624884.1264524375.411.12.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=low%20stakes%20poker%20videos; phpbb3_cmviy_k=; phpbb3_cmviy_u=2; phpbb3_cmviy_sid=d8df5c0943863004ca40ef9c392d371d; __utmb=174624884.4.10.1270989003; __utmc=174624884 Pragma: no-cache Cache-Control: no-cache HTTP/1.1 200 OK Date: Sun, 11 Apr 2010 12:57:41 GMT Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_auth_passthrough/2.1 FrontPage/5.0.2.2635 Last-Modified: Sun, 04 Apr 2010 12:51:06 GMT Etag: "eb42d6-7d9b843-48368aa6dc280" Accept-Ranges: bytes Content-Length: 131708995 Keep-Alive: timeout=10, max=30 Connection: Keep-Alive Content-Type: video/x-ms-wmv And here's what they look like for the request answered by my script: http://www.grinderschool.com/download_video_test.php?t=KillsAids021&format=wmv GET /download_video_test.php?t=KillsAids021&format=wmv HTTP/1.1 Host: www.grinderschool.com User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729) Accept: 
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Cookie: style_cookie=printonly; phpbb3_7c544_u=2; phpbb3_7c544_k=44b832912e5f887d; phpbb3_7c544_sid=e8852df42e08cc1b2250300c2897f78f; __utma=174624884.2719561324781918700.1251850714.1270986325.1270989003.575; __utmz=174624884.1264524375.411.12.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=low%20stakes%20poker%20videos; phpbb3_cmviy_k=; phpbb3_cmviy_u=2; phpbb3_cmviy_sid=d8df5c0943863004ca40ef9c392d371d; __utmb=174624884.4.10.1270989003; __utmc=174624884 HTTP/1.1 200 OK Date: Sun, 11 Apr 2010 12:58:02 GMT Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 mod_auth_passthrough/2.1 FrontPage/5.0.2.2635 X-Powered-By: PHP/5.2.11 Content-Disposition: attachment; filename=KillsAids021.wmv Vary: Accept-Encoding Content-Encoding: gzip Keep-Alive: timeout=10, max=30 Connection: Keep-Alive Transfer-Encoding: chunked Content-Type: video/x-ms-wmv So the question is...what can I do to make downloads from the script work properly? Again, for 99% of users, it works as is (though I find it annoying now that no filesize is reported and thus that no time estimate can be computed about the download).
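
    One avenue worth testing (a hedged sketch, not a confirmed fix for this server) is to switch off compression for the download script, so the response goes out with its Content-Length intact instead of gzipped and chunked as in the second header dump above:

        <?php
        // Hedged sketch: disable gzip for this response so Content-Length survives.
        if (function_exists('apache_setenv')) {
            apache_setenv('no-gzip', '1');             // ask mod_deflate to skip this request
        }
        ini_set('zlib.output_compression', 'Off');     // disable PHP-level output compression

        header("Content-type: $contenttype");
        header('Content-Length: ' . filesize($filename));
        header('Content-Disposition: attachment; filename="' . $title . '.' . $ext . '"');
        readfile($filename);
        exit;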

    Read the article

  • x86 instruction encoding tables

    - by Cheery
    I'm in the middle of rewriting my assembler. While at it, I'm curious about implementing disassembly as well. I want to make it simple and compact, and there are concepts I can exploit while doing so. It is possible to determine the rest of an x86 instruction's encoding from its opcode (plus, to some extent, the prefix bytes). I know many people have written tables for doing this. I'm not interested in mnemonics but in instruction encoding, because that is where the actual hard problem lies. For each opcode number I need to know:
    • does this instruction contain a modrm byte?
    • how many immediate fields does this instruction have?
    • what encoding does an immediate use?
    • is the immediate field an instruction-pointer-relative address?
    • what kind of registers does the modrm byte use for its operand and register fields?
    sandpile.org has much of what I'd need, but it's in a format that isn't easy to parse. Before I start writing and validating those tables myself, I decided to write this question. Do you know of this kind of table existing somewhere, in a form that doesn't require too much effort to parse?
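
    As an illustration of the shape such a table could take (field names purely hypothetical), each opcode entry might be a record like this:

        /* Hypothetical sketch of one row in an opcode-properties table. */
        struct opcode_info {
            unsigned char has_modrm;    /* 1 if a ModRM byte follows the opcode */
            unsigned char imm_count;    /* number of immediate fields (0..2) */
            unsigned char imm_size;     /* size of each immediate in bytes: 1, 2, 4, 8 */
            unsigned char imm_is_rel;   /* 1 if the immediate is an IP-relative address */
            unsigned char reg_kind;     /* register class used by the ModRM reg/rm fields */
        };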

    Read the article

  • Using both chunked transfer encoding and gzip

    - by RadiantHeart
    I recently started using gzip on my site and it worked like a charm in all browsers except Opera, which gives an error saying it could not decompress the content due to damaged data. From what I can gather from testing and googling, it might be a problem with using both gzip and chunked transfer encoding. The fact that there is no error when requesting small files like CSS files also points in that direction. Is this a known issue, or is there something else that I haven't thought about? Someone also mentioned that it could have something to do with sending a Content-Length header. Here is a simplified version of the most relevant part of my code: $contents = ob_get_contents(); ob_end_clean(); header('Content-Encoding: '.$encoding); print("\x1f\x8b\x08\x00\x00\x00\x00\x00"); $size = strlen($contents); $contents = gzcompress($contents, 9); $contents = substr($contents, 0, $size); print($contents); exit();
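
    For comparison, PHP's built-in gzencode() emits a complete gzip stream (header, deflate data, CRC32 and length trailer), which sidesteps hand-assembling the header as above; a minimal sketch, assuming the whole page is buffered first:

        <?php
        // Hedged sketch: let gzencode() build the full gzip stream, then send an
        // explicit Content-Length so the server need not fall back to chunking.
        $contents = ob_get_contents();
        ob_end_clean();
        $gz = gzencode($contents, 9);
        header('Content-Encoding: gzip');
        header('Content-Length: ' . strlen($gz));
        print($gz);
        exit();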

    Read the article

  • Need help with PHP URL encoding/decoding

    - by Kenan
    On one page I'm "masking"/encoding a URL which is passed to another page, where I decode the URL and start delivering the file to the user. I found some functions for encoding/decoding URLs, but sometimes the encoded URL contains "+" or "/" and the decoded link is broken. I must use a "folder structure" for the link; I cannot use a QueryString! Here is the encoding function: $urll = 'SomeUrl.zip'; $key = '123'; $result = ''; for($i=0; $i<strlen($urll); $i++) { $char = substr($urll, $i, 1); $keychar = substr($key, ($i % strlen($key))-1, 1); $char = chr(ord($char)+ord($keychar)); $result.=$char; } $result = urlencode(base64_encode($result)); echo '<a href="/user/download/'.$result.'/">PC</a>'; Here is the decoding: $urll = 'segment_3'; //Don't worry for this one its CMS retrieving 3rd "folder" $key = '123'; $resultt = ''; $string = ''; $string = base64_decode(urldecode($urll)); for($i=0; $i<strlen($string); $i++) { $char = substr($string, $i, 1); $keychar = substr($key, ($i % strlen($key))-1, 1); $char = chr(ord($char)-ord($keychar)); $resultt.=$char; } echo '<br />DEC: '. $resultt; So how should I encode and decode the URL? Thanks
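
    One standard way around "+" and "/" in a path segment is a URL-safe base64 alphabet that swaps them for "-" and "_"; a minimal sketch that could wrap the cipher output above:

        <?php
        // URL-safe base64: the result never contains + / = and survives in a path.
        function base64_encode_urlsafe($data) {
            return rtrim(strtr(base64_encode($data), '+/', '-_'), '=');
        }

        function base64_decode_urlsafe($data) {
            $b64 = strtr($data, '-_', '+/');
            $pad = (4 - strlen($b64) % 4) % 4;        // restore stripped padding
            return base64_decode($b64 . str_repeat('=', $pad));
        }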

    Read the article

  • Encoding multiple video streams with a single avconv invocation

    - by automatthias
    I played with avconv on Ubuntu and I'm now able to e.g. record the desktop with sound from a soundcard. One thing I wanted to do was recording two video inputs at the same time, for instance the desktop and from the webcam. I thought about doing something like this: avconv \ -f alsa \ -i default \ -acodec flac \ -f video4linux2 \ -r 6 \ -i /dev/video0 \ -f x11grab \ -i :0.0 \ out.mkv My thinking was that if you define multiple video inputs, and the .mkv format can handle multiple video streams, avconv will encode 2 video streams and 1 audio stream into one file. But this isn't what happens: avconv version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers built on Nov 6 2012 16:51:11 with gcc 4.7.2 [alsa @ 0x1091bc0] capture with some ALSA plugins, especially dsnoop, may hang. [alsa @ 0x1091bc0] Estimating duration from bitrate, this may be inaccurate Input #0, alsa, from 'default': Duration: N/A, start: 1354364317.020350, bitrate: N/A Stream #0.0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s [video4linux2 @ 0x10923e0] Estimating duration from bitrate, this may be inaccurate Input #1, video4linux2, from '/dev/video0': Duration: N/A, start: 100607.724745, bitrate: 29491 kb/s Stream #1.0: Video: rawvideo, yuyv422, 640x480, 29491 kb/s, 6 tbr, 1000k tbn, 6 tbc [x11grab @ 0x107b2a0] device: :0.0+83,87 -> display: :0.0 x: 83 y: 87 width: 854 height: 480 [x11grab @ 0x107b2a0] shared memory extension found [x11grab @ 0x107b2a0] Estimating duration from bitrate, this may be inaccurate Input #2, x11grab, from ':0.0+83,87': Duration: N/A, start: 1354364318.488382, bitrate: 196761 kb/s Stream #2.0: Video: rawvideo, bgra, 854x480, 196761 kb/s, 15 tbr, 1000k tbn, 15 tbc Incompatible pixel format 'bgra' for codec 'mpeg4', auto-selecting format 'yuv420p' [buffer @ 0x107fcc0] w:854 h:480 pixfmt:bgra [avsink @ 0x10bdf00] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out' [scale @ 0x10dc680] w:854 h:480 fmt:bgra -> w:854 h:480 fmt:yuv420p flags:0x4 Output #0, matroska, to '.../out.mkv': Metadata: encoder : Lavf53.21.0 Stream #0.0: Video: mpeg4, yuv420p, 854x480, q=2-31, 4000 kb/s, 1k tbn, 15 tbc Stream #0.1: Audio: libvorbis, 48000 Hz, 2 channels, s16 Stream mapping: Stream #2:0 -> #0:0 (rawvideo -> mpeg4) Stream #0:0 -> #0:1 (pcm_s16le -> libvorbis) Press ctrl-c to stop encoding [mpeg4 @ 0x10bd800] rc buffer underflow ^Cframe= 160 fps= 15 q=2.0 Lsize= 3414kB time=10.66 bitrate=2623.0kbits/s video:3273kB audio:131kB global headers:4kB muxing overhead 0.165600% Received signal 2: terminating. I'm not sure if it's the question of mapping (some -map options to add?) or that avconv just can't encode more than 1 video stream at one time. So is it an actual avconv limitation, or a limitation of the available containers, or me simply not finding the right combination of command line options?
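
    By default avconv selects only one stream per type, which matches the single mpeg4 video in the "Stream mapping" section of the log above. Explicitly mapping all three input streams should carry both videos plus the audio into the .mkv; a hedged sketch, untested against this exact avconv build:

        avconv \
            -f alsa -i default \
            -f video4linux2 -r 6 -i /dev/video0 \
            -f x11grab -i :0.0 \
            -map 0:0 -map 1:0 -map 2:0 \
            out.mkv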

    Read the article

  • SQLite, python, unicode, and non-utf data

    - by Nathan Spears
    I started by trying to store strings in sqlite using python, and got the message: sqlite3.ProgrammingError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings. Ok, I switched to Unicode strings. Then I started getting the message: sqlite3.OperationalError: Could not decode to UTF-8 column 'tag_artist' with text 'Sigur Rós' when trying to retrieve data from the db. More research and I started encoding it in utf8, but then 'Sigur Rós' starts looking like 'Sigur Rós' note: My console was set to display in 'latin_1' as @John Machin pointed out. What gives? After reading this, describing exactly the same situation I'm in, it seems as if the advice is to ignore the other advice and use 8-bit bytestrings after all. I didn't know much about unicode and utf before I started this process. I've learned quite a bit in the last couple hours, but I'm still ignorant of whether there is a way to correctly convert 'ó' from latin-1 to utf-8 and not mangle it. If there isn't, why would sqlite 'highly recommend' I switch my application to unicode strings? I'm going to update this question with a summary and some example code of everything I've learned in the last 24 hours so that someone in my shoes can have an easy(er) guide. If the information I post is wrong or misleading in any way please tell me and I'll update, or one of you senior guys can update. Summary of answers Let me first state the goal as I understand it. The goal in processing various encodings, if you are trying to convert between them, is to understand what your source encoding is, then convert it to unicode using that source encoding, then convert it to your desired encoding. Unicode is a base and encodings are mappings of subsets of that base. utf_8 has room for every character in unicode, but because they aren't in the same place as, for instance, latin_1, a string encoded in utf_8 and sent to a latin_1 console will not look the way you expect. In python the process of getting to unicode and into another encoding looks like: str.decode('source_encoding').encode('desired_encoding') or if the str is already in unicode str.encode('desired_encoding') For sqlite I didn't actually want to encode it again, I wanted to decode it and leave it in unicode format. Here are four things you might need to be aware of as you try to work with unicode and encodings in python. The encoding of the string you want to work with, and the encoding you want to get it to. The system encoding. The console encoding. The encoding of the source file Elaboration: (1) When you read a string from a source, it must have some encoding, like latin_1 or utf_8. In my case, I'm getting strings from filenames, so unfortunately, I could be getting any kind of encoding. Windows XP uses UCS-2 (a Unicode system) as its native string type, which seems like cheating to me. Fortunately for me, the characters in most filenames are not going to be made up of more than one source encoding type, and I think all of mine were either completely latin_1, completely utf_8, or just plain ascii (which is a subset of both of those). So I just read them and decoded them as if they were still in latin_1 or utf_8. It's possible, though, that you could have latin_1 and utf_8 and whatever other characters mixed together in a filename on Windows. 
Sometimes those characters can show up as boxes, other times they just look mangled, and other times they look correct (accented characters and whatnot). Moving on. (2) Python has a default system encoding that gets set when python starts and can't be changed during runtime. See here for details. Dirty summary ... well here's the file I added: \# sitecustomize.py \# this file can be anywhere in your Python path, \# but it usually goes in ${pythondir}/lib/site-packages/ import sys sys.setdefaultencoding('utf_8') This system encoding is the one that gets used when you use the unicode("str") function without any other encoding parameters. To say that another way, python tries to decode "str" to unicode based on the default system encoding. (3) If you're using IDLE or the command-line python, I think that your console will display according to the default system encoding. I am using pydev with eclipse for some reason, so I had to go into my project settings, edit the launch configuration properties of my test script, go to the Common tab, and change the console from latin-1 to utf-8 so that I could visually confirm what I was doing was working. (4) If you want to have some test strings, eg test_str = "ó" in your source code, then you will have to tell python what kind of encoding you are using in that file. (FYI: when I mistyped an encoding I had to ctrl-Z because my file became unreadable.) This is easily accomplished by putting a line like so at the top of your source code file: # -*- coding: utf_8 -*- If you don't have this information, python attempts to parse your code as ascii by default, and so: SyntaxError: Non-ASCII character '\xf3' in file _redacted_ on line 81, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details Once your program is working correctly, or, if you aren't using python's console or any other console to look at output, then you will probably really only care about #1 on the list. System default and console encoding are not that important unless you need to look at output and/or you are using the builtin unicode() function (without any encoding parameters) instead of the string.decode() function. I wrote a demo function I will paste into the bottom of this gigantic mess that I hope correctly demonstrates the items in my list. Here is some of the output when I run the character 'ó' through the demo function, showing how various methods react to the character as input. My system encoding and console output are both set to utf_8 for this run: '?' = original char <type 'str'> repr(char)='\xf3' '?' = unicode(char) ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data 'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Now I will change the system and console encoding to latin_1, and I get this output for the same input: 'ó' = original char <type 'str'> repr(char)='\xf3' 'ó' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3' 'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Notice that the 'original' character displays correctly and the builtin unicode() function works now. Now I change my console output back to utf_8. '?' = original char <type 'str'> repr(char)='\xf3' '?' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3' '?' 
= char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Here everything still works the same as last time but the console can't display the output correctly. Etc. The function below also displays more information that this and hopefully would help someone figure out where the gap in their understanding is. I know all this information is in other places and more thoroughly dealt with there, but I hope that this would be a good kickoff point for someone trying to get coding with python and/or sqlite. Ideas are great but sometimes source code can save you a day or two of trying to figure out what functions do what. Disclaimers: I'm no encoding expert, I put this together to help my own understanding. I kept building on it when I should have probably started passing functions as arguments to avoid so much redundant code, so if I can I'll make it more concise. Also, utf_8 and latin_1 are by no means the only encoding schemes, they are just the two I was playing around with because I think they handle everything I need. Add your own encoding schemes to the demo function and test your own input. One more thing: there are apparently crazy application developers making life difficult in Windows. #!/usr/bin/env python # -*- coding: utf_8 -*- import os import sys def encodingDemo(str): validStrings = () try: print "str =",str,"{0} repr(str) = {1}".format(type(str), repr(str)) validStrings += ((str,""),) except UnicodeEncodeError as ude: print "Couldn't print the str itself because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print ude try: x = unicode(str) print "unicode(str) = ",x validStrings+= ((x, " decoded into unicode by the default system encoding"),) except UnicodeDecodeError as ude: print "ERROR. unicode(str) couldn't decode the string because the system encoding is set to an encoding that doesn't understand some character in the string." print "\tThe system encoding is set to {0}. See error:\n\t".format(sys.getdefaultencoding()), print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the unicode(str) because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print uee try: x = str.decode('latin_1') print "str.decode('latin_1') =",x validStrings+= ((x, " decoded with latin_1 into unicode"),) try: print "str.decode('latin_1').encode('utf_8') =",str.decode('latin_1').encode('utf_8') validStrings+= ((x, " decoded with latin_1 into unicode and encoded into utf_8"),) except UnicodeDecodeError as ude: print "The string was decoded into unicode using the latin_1 encoding, but couldn't be encoded into utf_8. See error:\n\t", print ude except UnicodeDecodeError as ude: print "Something didn't work, probably because the string wasn't latin_1 encoded. See error:\n\t", print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the str.decode('latin_1') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print uee try: x = str.decode('utf_8') print "str.decode('utf_8') =",x validStrings+= ((x, " decoded with utf_8 into unicode"),) try: print "str.decode('utf_8').encode('latin_1') =",str.decode('utf_8').encode('latin_1') except UnicodeDecodeError as ude: print "str.decode('utf_8').encode('latin_1') didn't work. 
The string was decoded into unicode using the utf_8 encoding, but couldn't be encoded into latin_1. See error:\n\t", validStrings+= ((x, " decoded with utf_8 into unicode and encoded into latin_1"),) print ude except UnicodeDecodeError as ude: print "str.decode('utf_8') didn't work, probably because the string wasn't utf_8 encoded. See error:\n\t", print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the str.decode('utf_8') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",uee print print "Printing information about each character in the original string." for char in str: try: print "\t'" + char + "' = original char {0} repr(char)={1}".format(type(char), repr(char)) except UnicodeDecodeError as ude: print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), ude) except UnicodeEncodeError as uee: print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), uee) print uee try: x = unicode(char) print "\t'" + x + "' = unicode(char) {1} repr(unicode(char))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = unicode(char) ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = unicode(char) {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) try: x = char.decode('latin_1') print "\t'" + x + "' = char.decode('latin_1') {1} repr(char.decode('latin_1'))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = char.decode('latin_1') ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = char.decode('latin_1') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) try: x = char.decode('utf_8') print "\t'" + x + "' = char.decode('utf_8') {1} repr(char.decode('utf_8'))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = char.decode('utf_8') ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = char.decode('utf_8') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) print x = 'ó' encodingDemo(x) Much thanks for the answers below and especially to @John Machin for answering so thoroughly.
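
    As a compact restatement of the decode-then-encode rule above, together with the text_factory knob from the original error message, a minimal Python 2 sketch (table name hypothetical):

        # -*- coding: utf_8 -*-
        import sqlite3

        raw = 'Sigur R\xf3s'              # latin_1 bytes, e.g. from a filename
        name = raw.decode('latin_1')      # source encoding -> unicode

        conn = sqlite3.connect(':memory:')
        conn.execute('CREATE TABLE tracks (tag_artist TEXT)')
        conn.execute('INSERT INTO tracks VALUES (?)', (name,))  # store unicode, not bytes

        conn.text_factory = str           # only if you must read 8-bit bytestrings back
        row = conn.execute('SELECT tag_artist FROM tracks').fetchone()
        print repr(row[0])                # utf-8 bytes: 'Sigur R\xc3\xb3s'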

    Read the article

  • Get your content off Blogger.com

    - by Daniel Moth
    Due to blogger.com deprecating FTP users I've decided to move my blog. When I think of the content of a blog, 4 items come to mind: blog posts, comments, binary files that the blog posts linked to (e.g. images, ZIP files) and the CSS+structure of the blog. 1. Binaries The binary files you used in your blog posts are sitting on your own web space, so really blogger.com is not involved with that. Nothing for you to do at this stage, I'll come back to these in another post. 2. CSS and structure In the best case this exists as a separate CSS file on your web space (so no action for now) or in a worst case, like me, your CSS is embedded with the HTML. In the latter case, simply navigate from you dashboard to "Template" then "Edit HTML" and copy paste the contents of the box. Save that locally in a txt file and we'll come back to that in another post. 3. Blog posts and Comments The blog posts and comments exist in all the HTML files on your own web space. Parsing HTML files to extract that can be painful, so it is easier to download the XML files from blogger's servers that contain all your blog posts and comments. 3.1 Single XML file, but incomplete The obvious thing to do is go into your dashboard "Settings" and under the "Basic" tab look at the top next to "Blog Tools". There is a link there to "Export blog" which downloads an XML file with both comments and posts. The problem with that is that it only contains 200 comments - if you have more than that, you will lose the surplus. Also, this XML file has a lot of noise, compared to the better solution described next. (note that a tool I will refer to in a future post deals with either kind of XML file) 3.2 Multiple XML files First you need to find your blog ID. In case you don't know what that is, navigate to the "Template" as described in section 2 above. You will find references to the blog id in the HTML there, but you can also see it as part of the URL in your browser: blogger.com/template-edit.g?blogID=YOUR_NUMERIC_ID. Mine is 7 digits. You can now navigate to these URLs to download the XML for your posts and comments respectively: blogger.com/feeds/YOUR_NUMERIC_ID/posts/default?max-results=500&start-index=1 blogger.com/feeds/YOUR_NUMERIC_ID/comments/default?max-results=200&start-index=1 Note that you can only get 500 posts at a time and only 200 comments at a time. To get more than that you have to change the URL and download the next batch. To get you started, to get the XML for the next 500 posts and next 200 comments respectively you’d have to use these URLs: blogger.com/feeds/YOUR_NUMERIC_ID/posts/default?max-results=500&start-index=501 blogger.com/feeds/YOUR_NUMERIC_ID/comments/default?max-results=200&start-index=201 ...and so on and so forth. Keep all the XML files in the same folder on your local machine (with nothing else in there). 4. Validating the XML aka editing older blog posts The XML files you just downloaded really contain HTML fragments inside for all your blog posts. If you are like me, your blog posts did not conform to XHTML so passing them to an XML parser (which is what we will want to do) will result in the XML parser choking. So the next step is to fix that. This can be no work at all for you, or a huge time sink or just a couple hours of pain (which was my case). The process I followed was to attempt to load the XML files using XmlDocument.Load and wait for the exception to be thrown from my code. The exception would point to the exact offending line and column which would help me fix the issue. 
    Rather than fix it in the XML itself, I would go back and edit the offending blog post and fix it there - recommended! Then I'd repeat the cycle until the XML could be loaded in the XmlDocument (see the sketch at the end of this post). To give you an idea, some of the issues I encountered are: extra or missing quotes in img and href elements, direct usage of chevrons instead of encoding them as &lt;, missing closing tags, mismatched nested pairs of elements, and capitalization of HTML elements. For a full list of things that may go wrong see this. 5. Opportunity for other changes I also found a few posts that did not have a category assigned, so I fixed those too. I took the further opportunity to create new categories and tag some of my blog posts with them. Note that I did not remove/change categories of existing posts, but only added. In another post we'll see how to use the XML files you stored in the local folder… Comments about this post welcome at the original blog.
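
    A minimal C# sketch of the validation loop described in step 4 (the folder path is hypothetical); XmlDocument.Load throws an XmlException whose message includes the offending line and column:

        using System;
        using System.IO;
        using System.Xml;

        class ValidateBloggerXml
        {
            static void Main()
            {
                foreach (var path in Directory.GetFiles(@"C:\blogger-export", "*.xml"))
                {
                    try
                    {
                        new XmlDocument().Load(path);   // chokes on non-XHTML blog posts
                        Console.WriteLine("OK      " + path);
                    }
                    catch (XmlException ex)
                    {
                        Console.WriteLine("INVALID {0}: {1}", path, ex.Message);
                    }
                }
            }
        }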

    Read the article

  • How to add event receiver to SharePoint2010 content type programmatically

    - by ybbest
    Today, I’d like to show how to add an event receiver to a SharePoint 2010 content type programmatically. 1. Create an empty SharePoint project, add a class called ItemContentTypeEventReceiver that inherits from SPItemEventReceiver, and implement your logic as below: public class ItemContentTypeEventReceiver : SPItemEventReceiver { private bool eventFiringEnabledStatus; public override void ItemAdded(SPItemEventProperties properties) { base.ItemAdded(properties); UpdateTitle(properties); } private void UpdateTitle(SPItemEventProperties properties) { SPListItem addedItem = properties.ListItem; string enteredTitle = addedItem["Title"] as string; addedItem["Title"] = enteredTitle + " Updated"; DisableItemEventsScope(); addedItem.Update(); EnableItemEventsScope(); } public override void ItemUpdated(SPItemEventProperties properties) { base.ItemUpdated(properties); UpdateTitle(properties); } private void DisableItemEventsScope() { eventFiringEnabledStatus = EventFiringEnabled; EventFiringEnabled = false; } private void EnableItemEventsScope() { eventFiringEnabledStatus = EventFiringEnabled; EventFiringEnabled = true; } } 2. Create a Site- or Web-scoped feature (depending on your requirements) and implement your feature event handler as below: public override void FeatureActivated(SPFeatureReceiverProperties properties) { SPWeb web = GetFeatureWeb(properties); //http://karinebosch.wordpress.com/walkthroughs/event-receivers-theory/ string assemblyName = System.Reflection.Assembly.GetExecutingAssembly().FullName; const string className = "YBBEST.AddEventReceiverToContentType.ItemContentTypeEventReceiver"; SPContentType contentType = web.ContentTypes["Item"]; AddEventReceiverToContentType(className, contentType, assemblyName, SPEventReceiverType.ItemAdded, SPEventReceiverSynchronization.Asynchronous); AddEventReceiverToContentType(className, contentType, assemblyName, SPEventReceiverType.ItemUpdated, SPEventReceiverSynchronization.Asynchronous); contentType.Update(); } protected static void AddEventReceiverToContentType(string className, SPContentType contentType, string assemblyName, SPEventReceiverType eventReceiverType, SPEventReceiverSynchronization eventReceiverSynchronization) { if (className == null) throw new ArgumentNullException("className"); if (contentType == null) throw new ArgumentNullException("contentType"); if (assemblyName == null) throw new ArgumentNullException("assemblyName"); SPEventReceiverDefinition eventReceiver = contentType.EventReceivers.Add(); eventReceiver.Synchronization = eventReceiverSynchronization; eventReceiver.Type = eventReceiverType; eventReceiver.Assembly = assemblyName; eventReceiver.Class = className; eventReceiver.Update(); } 3. Deploy your solution, and you now have an event receiver attached to the Item content type. You can download the complete source code here. You can also check how to add an event receiver to a list using the SharePoint event receiver item in Visual Studio 2010 in my previous blog.

    Read the article

  • Maintaining Revision Levels

    - by kyle.hatlestad
    A question that came up on an earlier blog post was how to limit the number of revisions on a piece of content. UCM does not inherently enforce any sort of limit on how many revisions you can have. It's unlimited. In some cases, there may be content that goes through lots of changes, but there just simply isn't a need to keep all of its revisions around. Deleting those revisions through the content information screen can be very cumbersome. And going through the Repository Manager applet can take time as well to filter and find the revisions to get rid of. But there is an easier way through the Archiver. The Export Query criteria in Archiver includes a very handy field called 'Revision Rank'. With revision labels, they typically go up as new revisions come in (e.g. 1, 2, 3, 4, etc...). But you can't really use this field to tell it to keep the top 5 revisions. Those top 5 revision numbers are always going up. But revision rank goes the opposite direction. The very latest revision is always 0. The previous revision to that is 1. Previous revision to that is 2. And so on and so forth. With revision rank, you can set your query to look for any Revision Rank greater or equal to 5. Now as older revisions move down the line, their revision rank gets higher and higher until they reach that threshold. Then when you run that archive export, you can choose to delete and remove those revisions. Running that export in Archiver is normally a manual process. But with Idc Command, you can script the process and have it run automatically from the server. Idc Command is a utility that allows you to run any of the content server services via the command line. You basically feed it a text file with the services and parameters defined along with the user to run it as. The Idc Command executable is located within the \bin\ directory: $ ./IdcCommand -f DeleteOlderRevisions.txt -u sysadmin -l delete_revisions.log In this example, our IdcCommand file to run the export and do the deletions would look like: IdcService=EXPORT_ARCHIVE aArchiveName=DeleteOlderRevisions aDoDelete=1 IDC_Name=idc dataSource=RevisionIDs <<EOD>> You can then use automated scheduling routines in the OS to run the command and command file at the frequency needed. Remember that you are deleting the revisions from within UCM, but they are still getting placed within the archive. So you will need to delete those batches to have them fully removed (or re-import if you need to recover them). For more information about Idc Command, you can find that in the Idc Command Reference Guide.

    Read the article

  • Loading SpriteFont through a different class than Game.cs

    - by MintyAnt
    I am trying to load up a single SpriteFont to print some debug information. In our current game, we load both textures and music through a ResourceManager. Both are loaded from a file stream and thus do not require Content.Load:

    SoundEffect soundEffect = SoundEffect.FromStream(fs);

    Since this ResourceManager does not inherit from Game and is not like Game.cs, I cannot use the usual method:

    SpriteFont spriteFont = Content.Load<SpriteFont>(resource.Key.Item2);

    Does anyone have any idea how I can either:
    - Load the SpriteFont a different way
    - Create my own ContentManager
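    One possible approach (a sketch of my own, not from the post): a ContentManager can be constructed outside of Game as long as it is handed an IServiceProvider that exposes the graphics device, typically Game.Services. The FontResourceManager shape below is hypothetical:

    using System;
    using Microsoft.Xna.Framework.Content;
    using Microsoft.Xna.Framework.Graphics;

    public class FontResourceManager
    {
        private readonly ContentManager content;

        // serviceProvider is typically game.Services; "Content" is the usual root directory.
        public FontResourceManager(IServiceProvider serviceProvider)
        {
            content = new ContentManager(serviceProvider, "Content");
        }

        public SpriteFont LoadFont(string assetName)
        {
            // Unlike SoundEffect, SpriteFont has no FromStream helper, so a
            // ContentManager (and a compiled .xnb asset) is still required.
            return content.Load<SpriteFont>(assetName);
        }

        public void Unload()
        {
            content.Unload();
        }
    }

    Construct it once with game.Services during initialization and it behaves like the ContentManager that Game itself owns.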

    Read the article

  • Oracle Tutor: Create Accessible Content for the Disabled Community

    - by emily.chorba(at)oracle.com
    For many reasons--legal, business, and ethical--Oracle recognizes the need for its applications, and our customers' and partners' products built with our tools, to be usable by the disabled community. The following features of Tutor Author and Publisher software facilitate the creation of accessible HTML content for the disabled community.

    Tables
    The following formatting guidelines will ensure that Tutor documents containing tables will be accessible once they are converted to HTML.
    • Determine whether a table is a "data table" or whether you are using a table simply for formatting. If it is a data table, you must use a heading for each column, and you should format this heading row as "table heading" style and select Table > Heading Rows Repeat.
    • For non-data tables, it is not necessary to include a heading row.

    Graphics
    To create accessible graphics, add a caption to the graphic. In Microsoft Office 2000 and greater, right-click on the graphic and select Format Picture > Web (tab) > Alternative Text, or select the graphic and then Format > Picture > Web (tab) > Alternative Text. Enter the appropriate information in the dialog box. When a document containing a graphic with alternative text is converted to HTML by Tutor, the HTML document will contain the appropriate accessibility information.

    Javascript elements
    The tabbed format and other javascript elements in the HTML version of the Tutor documents may not be accessible to all users. A link to an accessible/printable version of the document is available in the upper right corner of all Tutor documents.

    Repetitive data
    If repetitive data such as the distribution section and the ownership section are causing accessibility issues with your Tutor documents, you can insert a bookmark in the appropriate location of the document. When the document is converted to HTML, the bookmark will be converted to an A NAME reference (also known as an internal link). With this reference, you can create a link in Header.txt that can be prepended to each Tutor document, allowing the user to bypass repetitive sections.

    Tutor and Oracle Applications
    Regarding accessibility, please check Oracle's website on accessibility http://www.oracle.com/accessibility/ to find out which version of E-Business Suite is certified to work with screen readers. Oracle Tutor 11.5.6A and greater works with screen readers such as JAWS. There is no certification between Oracle Tutor and Oracle Applications because there are no related dependencies. It doesn't matter which version of the Oracle Applications you are running, so it is possible to use Oracle Tutor with earlier versions of Oracle Applications.

    Oracle Business Process Converter and Oracle Applications
    Oracle Business Process Converter (OBPC) converts Visio, XPDL, and Tutor models to Oracle Business Process Architect and Oracle Business Process Management. The OBPC is one of a collection of plugins to Oracle JDeveloper. Please see the VPAT, as the same considerations apply.

    Learn More
    For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.

    Emily Chorba
    Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • Single paging Meta Tag Duplication [duplicate]

    - by Dominic Reigns
    Duplicate of: Should paginated content have unique meta tags?

    How can I make Google understand a single piece of paginated content? One piece of content on my site is split across more than 100 pages, with navigation like "1-10 of 199 | Go to Page 1 2 3 4 5 6 7 8 9 10 ...". Do I need to put fresh meta tags on each page? Help me out with this issue.

    Read the article

  • Content Based Routing with BRE and ESB

    - by Christopher House
    I've been working with BizTalk 2009 and the ESB toolkit for the past couple of days.  This is actually my first exposure to ESB, and so far I'm pleased with how easy it is to work with.

    Initially we had planned to use UDDI for storing endpoint information.  However, after discussing this with my client, we opted to look at BRE instead of UDDI, since we're already storing transforms in BRE.  Fortunately, making the change to BRE from UDDI was quite simple.  This solution of course has the added advantage of not needing to go through the convoluted process of registering our endpoints in UDDI.

    The first thing to remember if you want to do content-based routing with BRE and ESB is that the pipelines included in the ESB toolkit don't include disassembler components.  This means that you'll need to first create a custom receive pipeline with the necessary disassembler for your message type as well as the ESB components, the itinerary selector and the dispatcher.

    Next you need to create a BRE policy.  The ESB.ContextInfo vocabulary contains vocabulary links for the various items in the ESB context dictionary.  In this vocabulary, you'll find an item called Context Message Type; use this as the left-hand side of your condition.  Set the right-hand side to your message type, something like http://your.message.namespace/#yourrootelement.  Now find the ESB.EndPointInfo vocabulary.  This contains links to all the properties related to endpoint information.  Use the various set operators in your rule's action to configure your endpoint.  In the example above, I'm using the WCF-SQL adapter.

    Now that the hard work is out of the way, you just need to configure the resolver in your itinerary.  Nothing complicated here.  Just select BRE as your resolver implementation and select your policy from the drop-down list.  Note that when you select a policy, the Version field will be automatically filled in with the version of your policy.  If you leave this as-is, the resolver will always use that policy version.  Alternatively, you can clear the version number and the resolver will use the highest deployed version.

    Read the article

  • Managing a Web Portal

    - by A competent translator
    After skimming almost all questions tagged "books" here at Pro Webmasters to find a book on managing web portals, I found only references to general webmastering. Are there books on managing and curating the content of public web portals, i.e. a governmental portal for a given department or agency? I need a reference on the day-to-day management and best practices of a portal and its content. References on managing multilingual portals are also welcome.

    Read the article

  • ??????????????????????????R-online"The Shop"?????

    - by mamoru.kobayashi
    Renown has adopted Oracle Universal Content Management for the new e-commerce (EC) site it is opening under RMAP, and the site launched on April 20, 2010, as R-online "The Shop".
    · In April 2009, Renown announced RMAP (the RENOWN Make Again Plan), its plan to rebuild the business, and the new EC site is one of the initiatives under that plan.
    · For the EC site, the company adopted Oracle Universal Content Management as its web content management system (CMS).

    Read the article

  • CNAME subdomain of subdomain with content on first subdomain

    - by BandonRandon
    Okay, so maybe that title is a bit confusing, but here is what I'm trying to do: I have a subdomain with content at my.example.com, and I want to set up something like sub.my.example.com. This does not seem to work if there is content on my.example.com. However, if there is no content on the first subdomain, it seems to be okay to CNAME the record to example2.org. This is my first real experience with CNAME, so I'm wondering about common errors to check for. my.example.com is an A record and sub.my.example.com is a CNAME.

    EDIT: I am managing the DNS through Web Host Manager (WHM) DNS records. The content is at http://vbx.knowconceptdw.net, and I'm trying to set a voicemail CNAME at http://voicemail.vbx.knowconceptdw.net pointing to http://api.twillio.com.

    Read the article

  • Apache2 - mod_expire and mod_rewrite not working in httpd.conf - serving content from tomcat

    - by Ankit Agrawal
    Hi, I am using an Apache2 server running on Debian which forwards all HTTP requests to Tomcat installed on the same machine. I have two files under my /etc/apache2/ folder, apache2.conf and httpd.conf. I modified the httpd.conf file to look like the following:

    # forward all http requests on port 80 to tomcat
    ProxyPass / ajp://127.0.0.1:8009/
    ProxyPassReverse / ajp://127.0.0.1:8009/

    # gzip text content
    AddOutputFilterByType DEFLATE text/plain
    AddOutputFilterByType DEFLATE text/html
    AddOutputFilterByType DEFLATE text/xml
    AddOutputFilterByType DEFLATE text/css
    AddOutputFilterByType DEFLATE text/javascript
    AddOutputFilterByType DEFLATE application/xml
    AddOutputFilterByType DEFLATE application/xhtml+xml
    AddOutputFilterByType DEFLATE application/rss+xml
    AddOutputFilterByType DEFLATE application/javascript
    AddOutputFilterByType DEFLATE application/x-javascript
    DeflateCompressionLevel 9
    BrowserMatch ^Mozilla/4 gzip-only-text/html
    BrowserMatch ^Mozilla/4\.0[678] no-gzip
    BrowserMatch \bMSIE !no-gzip !gzip-only-text/html

    # Turn on Expires and mark all static content to expire in a week
    # unset last modified and ETag
    ExpiresActive On
    ExpiresDefault A0
    <FilesMatch "\.(jpg|jpeg|png|gif|js|css|ico)$">
        ExpiresDefault A604800
        Header unset Last-Modified
        Header unset ETag
        FileETag None
        Header append Cache-Control "max-age=604800, public"
    </FilesMatch>

    RewriteEngine On
    # rewrite all www.example.com/content/XXX-01.js and YYY-01.css files to XXX.js and YYY.css
    RewriteRule ^content/(js|css)/([a-z]+)-([0-9]+)\.(js|css)$ /content/$1/$2.$4
    # remove all query parameters from URL after we are done with it
    RewriteCond %{THE_REQUEST} ^GET\ /.*\;.*\ HTTP/
    RewriteCond %{QUERY_STRING} !^$
    RewriteRule .* http://example.com%{REQUEST_URI}? [R=301,L]
    # rewrite all www.example.com to example.com
    RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
    RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]

    I want to achieve the following:

    1. Forward all traffic to Tomcat.
    2. Gzip all the text content.
    3. Put a one-week expiry header on all static files and unset the ETag/Last-Modified headers.
    4. Rewrite all js and css files to a certain format.
    5. Remove all the query parameters from the URL.
    6. Forward all www.example.com requests to example.com.

    The problem is that only 1 and 2 are working. I tried a lot with many combinations, but the expires and rewrite rules (3-6) do not work at all. I also tried moving these rules to the apache2.conf and .htaccess files, but that didn't work either. It does not give any error; these rules are simply ignored. The expires and rewrite modules are enabled. Please let me know what I should do to fix this.
    1. Do I need to add something else in the httpd.conf file (like Options +FollowSymLinks) or something else?
    2. Do I need to add something in the apache2.conf file?
    3. Do I need to move these rules to the .htaccess file? If yes, what should I write in that file and where should I keep it? In the /etc/apache2/ folder or the /var/www/ folder?
    4. Any other info to make this work?
    Thanks, Ankit

    Read the article

< Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >