Search Results

Search found 8953 results on 359 pages for 'human resources'.


  • Paper-free Customer Engagement

    - by Michael Snow
    Appropriate repost from our friends at the AIIM blog, Digital Landfill -- John Mancini, supporting our mission of enabling customer engagement through better technology choices. My wife didn't even give me a card for #wpfd -- and they say husbands are bad at remembering anniversaries. Well, today is the third World Paper Free Day. I just got off the Tweet Jam, and there was a host of ideas for getting rid of -- or at least reducing -- paper. When we first started talking about "paper-free," most of the reasons raised to pursue this direction were "green" reasons. I'm glad to see that the thinking has moved on to questions about how getting rid of paper and digitizing processes helps improve customer engagement. And the bottom line. And process responsiveness. Not that the "green" reasons have gone away, but it's nice to see a maturation in the BUSINESS reasons to get rid of paper.

    Our World Paper Free Handbook (do not, do not, do not print it!) looks at how less paper in the workplace delivers significant benefits. Key findings show that eliminating paper from processes can improve the responsiveness of customer service by 300 percent. Removing paper from business processes and moving content to PCs and tablets has the added advantage of helping companies adopt mobile-enabled processes and eliminate elapsed time, lost forms, poor data, and re-keying. To effectively mobile-enable processes and reduce reliance on paper, data should be captured as close to the point of origination as possible, which makes information easily available to whomever needs it, wherever they are, in the shortest time possible. This handbook summarizes the value of automating manual, paper-based processes. It then goes a step beyond to provide actionable steps that will set you on the path to productivity, profitability, and, yes, less paper. Get your copy today and send the link around to your peers and colleagues. Here's the link; please share it! http://www.aiim.org/Research-and-Publications/Research/AIIM-White-Papers/WPFD-Revolution-Handbook

    And don't miss out on the real-world discussions about increasing engagement with WebCenter in new webinars being offered over the next couple of weeks:

    October 30, 2012: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter
    November 1, 2012: WebCenter Content for Applications: Streamline Processes with Oracle WebCenter Content Management for Human Resources Applications
    Available On-Demand: Using Oracle WebCenter to Content-Enable Your Business Applications

    Read the article

  • E-Business Suite : Role of CHUNK_SIZE in Oracle Payroll

    - by Giri Mandalika
    Different batch processes in the Oracle Payroll flow have the ability to spawn multiple child processes (or threads) to complete the work at hand. The number of child processes to fork is controlled by the THREADS parameter in the APPS.PAY_ACTION_PARAMETERS view.

    THREADS parameter

    The default value for the THREADS parameter is 1, which is fine for a single-processor system but not optimal for modern multi-core, multi-processor systems. Setting the THREADS parameter to a value equal to or less than the total number of [virtual] processors available on the system may improve the performance of payroll processing. On the down side, however, since multiple child processes operate against the same set of payroll tables in the HR schema, the database may experience undesired consequences such as buffer busy waits and index contention, which give up some of the gains achieved by using multiple child processes/threads to process the work. A couple of other action parameters, CHUNK_SIZE and CHUNK_SHUFFLE, help alleviate the database contention.

    e.g., set a value for the THREADS parameter as shown below:

        CONNECT APPS/APPS_PASSWORD

        UPDATE PAY_ACTION_PARAMETERS
           SET PARAMETER_VALUE = DESIRED_VALUE
         WHERE PARAMETER_NAME = 'THREADS';

        COMMIT;

    (I am not aware of any maximum value for the THREADS parameter.)

    CHUNK_SIZE parameter

    The size of each commit unit for the batch process is controlled by the CHUNK_SIZE action parameter. In other words, chunking is the act of splitting the assignment actions into commit groups of the desired size, represented by the CHUNK_SIZE parameter. The default value is 20, and each thread processes one chunk at a time -- which means each child process inserts or processes 20 assignment actions at any time. When multiple threads are configured, each thread picks up a chunk to process, completes the assignment actions, and then picks up another chunk. This is repeated until all the chunks are exhausted.

    It is possible to use different chunk sizes in different batch processes. During the initial phase of processing, CHUNK_SIZE number of assignment actions are inserted into the relevant table(s). When multiple child processes insert data at the same time into the same set of tables, as explained earlier, the database may experience contention. The default value of 20 is mostly optimal in such a case. Experiment with different values for the initial phase by adjusting the CHUNK_SIZE parameter by +/-10 and observe the performance impact. A larger value may make sense during the main processing phase. Again, experimentation is the key to finding the value suitable for your environment. Start with a large value, such as 2000, for the chunk size, then increment or decrement the size by 500 at a time until an optimal value is found.

    e.g., set a value for the CHUNK_SIZE parameter as shown below:

        CONNECT APPS/APPS_PASSWORD

        UPDATE PAY_ACTION_PARAMETERS
           SET PARAMETER_VALUE = DESIRED_VALUE
         WHERE PARAMETER_NAME = 'CHUNK_SIZE';

        COMMIT;

    The CHUNK_SIZE action parameter accepts a value as low as 1 or as high as 16000.

    CHUNK SHUFFLE parameter

    By default, chunks of assignment actions are processed sequentially by all threads -- which may not be a good thing, especially given that all child processes/threads perform similar actions against the same set of tables at almost the same time. By "not a good thing," I mean that the default behavior leads to contention in the database (in data blocks, for example). It is possible to relieve some of that database contention by randomizing the processing order of the chunks of assignment actions. This behavior is controlled by the CHUNK SHUFFLE action parameter. Chunk processing is not randomized unless explicitly configured.

    e.g., set chunk shuffling as shown below:

        CONNECT APPS/APPS_PASSWORD

        UPDATE PAY_ACTION_PARAMETERS
           SET PARAMETER_VALUE = 'Y'
         WHERE PARAMETER_NAME = 'CHUNK SHUFFLE';

        COMMIT;

    Finally, I recommend checking out the following document for additional details and additional pay action tunable parameters that may speed up the processing of Oracle Payroll:

        My Oracle Support Doc ID 226987.1: Oracle 11i & R12 Human Resources (HRMS) & Benefits (BEN) Tuning & System Health Checks

    Also experiment with different combinations of parameters and values until the right set of action parameters and values is found for your deployment.
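    As a quick sanity check before and after tuning, you can read the current settings back in one query. This is a minimal sketch (not from the original post) that assumes only the PARAMETER_NAME and PARAMETER_VALUE columns already used by the UPDATE statements above:

        CONNECT APPS/APPS_PASSWORD

        -- Show the current values of the three tunables discussed above
        SELECT PARAMETER_NAME, PARAMETER_VALUE
          FROM PAY_ACTION_PARAMETERS
         WHERE PARAMETER_NAME IN ('THREADS', 'CHUNK_SIZE', 'CHUNK SHUFFLE')
         ORDER BY PARAMETER_NAME;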

    Read the article

  • SharePoint Content and Site Editing Tips

    - by Bil Simser
    A few content management and site editing tips for power users, on this bacon-flavoured unicorn morning. The theme here is: keep it clean!

    Write "friendly" email addresses
    Remember, it's human beings reading your content. Seeing something like "If you have questions please send an email to [email protected]" breaks up the readability. Instead, just write the content in plain English, then go back, highlight the name, and insert a link (note: you might have to prefix the link with mailto:[email protected]). It makes for a friendlier-looking page and hides the ugliness that is sometimes in email addresses.

    Use friendly column and list names
    This is a big pet peeve of mine. When you first create a column or list with spaces in the name, the internal name is set from it. The display name might be "My Amazing List of Animals with Large Testicles" but the internal (and link) name becomes "My_x0020_Amazing_x0020_List_x0020_of_x0020_Animals_x0020_with_x0020_Large_x0020_Testicles". What's worse is if you create a publishing page named "This Website is Fueled By a Dolphin's Spleen". Not only is it incorrect grammar, but the apostrophe wreaks havoc on both the internal name for the list (with lots of crazy hex codes) and the hyperlink (where everything is URL-encoded). Instead, create the list with a distinct and compact name, then go back and change it to whatever you want; a scriptable version of this trick follows at the end of this post. The end result is a better-formed name that you can both script and access in code more easily.

    Keep your views clean
    When you add a column to a list or create a new list, the default is to add it to the default view. Do everyone a favour and don't check this box! The default view of a list should be something similar to the Title field and nothing else. Keep it clean. If you want a default view that's different, go back and create one with all the fields, filtering, and sorting columns you want, and set it as the default. It's a good idea to keep the original AllItems.aspx (note the lack of a space in the filename!) simple and unfiltered. It's also a good idea to keep your column count down in views. Don't let every column be added by default, and don't add every column just because you can. Create separate views for distinct responsibilities, and try to keep the number of columns down to a single screen to prevent horizontal scrolling.

    Simple navigation
    The Quick Launch is a great tool for navigating around your site, but don't use the default of adding all lists to it. Uncheck that box and keep navigation simple. Create custom groupings that make sense: if "Documents and Lists" doesn't fit your site but "Reports and Notices" makes more sense, then do that. Also, hide internal lists from the Quick Launch. For example, if most users don't need to see all the lookup tables you might have on a site, don't show them. You can use audience filtering on the Quick Launch if you want to hide admin items from non-admin users, so consider that as an option.

    Enjoy!
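    As a footnote, here is the promised scriptable version of the create-then-rename trick, a minimal sketch using the SharePoint 2010 server object model from PowerShell (the site URL and names are placeholders, not from the original post):

        $web = Get-SPWeb "http://intranet/sites/example"
        # Create the list with a short, clean name so the internal/URL name stays tidy...
        $listId = $web.Lists.Add("Animals", "", [Microsoft.SharePoint.SPListTemplateType]::GenericList)
        $list = $web.Lists[$listId]
        # ...then switch the display name to the long, friendly title.
        $list.Title = "My Amazing List of Animals"
        $list.Update()
        $web.Dispose()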

    Read the article

  • The Social Content Conundrum

    - by Mike Stiles
    Here’s the social content conundrum: people who are not entertainers are being asked to entertain. Despite a world of skilled MBAs, marketing savants, technological innovators, analysts, social strategists and consultants, every development in social for brands keeps boomeranging right back to the same unavoidable truth: success hinges on having content creators who know how to entertain the target audience. You can’t make this all about business processes. You can’t make this all about technology, though data is critical and helps inform content. This is about having human beings who know the audience, know what they’d love to see, and can create the magic that will draw and hold them.

    Since showing up in the News Feed is critical for exposure and engagement, and since social ads primarily serve to amplify content that’s performing well, I’m comfortable saying content creators are becoming exponentially more recruited and valued. They will no longer be commodities. They’ll be your stars.

    Social has fundamentally changed the relationship between brand and consumer. No longer can the customer be told to sit down, shut up, and listen to our ads. It’s now all about what consumers are willing to watch or read. Their patience for subjecting themselves to material they aren’t interested in is waning. Therefore, brands must now be producers of entertainment and information content, not merely placers of ads within someone else’s content. Social has given you a huge stage, with an audience sitting out there waiting to see what you’re going to do. What are you putting on that stage?

    For most corporate environments, entertaining is alien. It’s risky and subjective. Most operate around two foundational principles: control and fear. To entertain and inform with branded content, some control has to go. You control the product. Past that, control is being transferred into the hands of the consumer. The “fear first” culture also has to yield. If you strive to never make waves, you will move absolutely nothing.

    Because most corporations don’t house entertainers, they must be found, then trusted. They’re usually a little weird. The ideas they’ll bring may seem “out there.” But like any business professional, they’ve gone through the training and experiences that make them uniquely good at what they do, even if you don’t quite understand them. It’s okay. It’s what the audience thinks that matters. Get it right, and you’ll be generating one ambassador after another who’s proud to be identified with the brand and will regularly consume and share your content.

    Entertainment entities are able to shape our culture and succeed beyond their wildest dreams by being beholden to one thing…what the public likes and wants. When brands put the same emphasis on crowd-pleasing content, they too will enjoy brand fame the likes of which they’ve never seen. The stage is yours. Now get out there and go for that applause.

    Read the article

  • Finalists for Community Manager of the Year Announced

    - by Mike Stiles
    For as long as brand social has been around, there’s still an amazing disparity from company to company in the role of Community Manager. At some brands, they are the lead social innovators. At others, the task has been relegated to interns who are at the company temporarily. Some have total autonomy and trust. Others must get chain-of-command permission each time they engage.

    So what does a premier, “worth their weight in gold” Community Manager look like? More than anyone else in the building, they have the most intimate knowledge of who the customer is. They live on the front lines and are the first to detect problems and opportunities. They are sincere, raving fans of the brand themselves and are trusted advocates for the others. They’re fun to be around. They aren’t salespeople. Give me one Community Manager who’s been at the job 6 months over 5 focus groups any day. Because, not unlike in speed dating, they must immediately learn how to make a positive, lasting impression on fans so they’ll want to return and keep the relationship going. They’re informers and entertainers, with a true belief in the value of the brand’s proposition.

    Internally, they live at the mercy of the resources allocated toward social. Many, whose managers don’t understand the time involved in properly curating a community, are tasked with 2 or 3 too many of them. 63% of CMs spend over 30 hours a week on one community. They come to intuitively know the value of the relationships they’re building, even if they can’t always be shown in a bar graph to the C-suite. Many must communicate how the customer feels to executives who simply don’t seem to want to hear it. Some can get the answers fans want quickly; others are frustrated in their ability to respond within an impressive timeframe.

    In short, in a corporate world coping with sweeping technological changes, amidst business school doublespeak, pie charts, decks, strat sessions and data points, the role of the Community Manager is the most…human. They are the true emotional connection to the real-life customer. Which is why we sought to find a way to recognize and honor who they are, what they do, and how well they have defined the position as social grows and integrates into the larger organization. Meet our 3 finalists for Community Manager of the Year.

    Jeff Esposito with Vistaprint
    Jeff manages and heads up content strategy for all social networks and blogs. He also crafts company-wide policies surrounding the social space. Vistaprint won the NEDMA Gold Award for Twitter Strategy in 2010 and 2011, and a Bronze in 2011 for Social Media Strategy. Prior to Vistaprint, Jeff was Media Relations Manager with the Long Island Ducks. He graduated from Seton Hall University with a BA in English and a minor in Classical Studies.

    Stacey Acevero with Vocus
    In addition to social management, Stacey blogs at Vocus on influential marketing and social media, and blogs at PRWeb on public relations and SEO. She’s been named one of the #Nifty50 Women in Tech on Twitter 2 years in a row, as well as included in the 15 up-and-coming PR pros to watch in 2012.

    Carly Severn with the San Francisco Ballet
    Carly drives engagement, widens the fanbase and generates digital content for America’s oldest professional ballet company. Managed properties include Facebook, Twitter, Tumblr, Pinterest, Instagram, YouTube and G+. Prior to joining the SF Ballet, Carly was Marketing & Press Coordinator at The Fitzwilliam Museum at Cambridge, where she graduated with a degree in English.
We invite you to join us at the first annual Oracle Social Media Summit November 14 and 15 at the Wynn in Las Vegas where our finalists will be featured. Over 300 top brand marketers, agency executives, and social leaders & innovators will be exploring how social is transforming business. Space is limited and the information valuable, so get more info and get registered as soon as possible at the event site.

    Read the article

  • Oracle Executive Strategy Brief: Enterprise-Grade Cloud Applications

    - by B Shashikumar
    Cloud Computing has clearly evolved into one of the dominant secular trends in the industry. Organizations are looking to the cloud to change how they buy and consume IT. And it's no longer just about lower up-front costs. The cloud promises to deliver greater agility and free up resources to focus on innovation versus running and maintaining systems. But are organizations actually realizing these benefits?

    The full promise of cloud is not being realized by customers who entrust their business to multiple niche cloud providers. While almost 9 out of 10 companies expect more IT agility with cloud, only 47% are actually getting it (source: 2011 State of Cloud Survey by Symantec). These niche cloud customers have also seen the promises of lower costs, efficiency gains, improved security, and compliance go unfulfilled. Having one cloud provider for customer relationship management (CRM) and another for human capital management (HCM), and then trying to glue these proprietary systems together while integrating to a back-office financial system, can add to complexity and long-term costs. Completing a business process or generating an integrated report is cumbersome, and leverages incomplete data.

    Why can't niche cloud providers deliver on the full promise of cloud? It's simple: you still need to complete business processes. You still need reporting that enables you to take action using data from multiple systems. You still have to comply with SOX and other industry regulations. These requirements don't go away just because you deploy in the cloud. Delivering lower up-front costs by enabling customers to buy software as a service (SaaS) is the easy part. To get real value that lasts longer than your quarterly report, it's important to realize the benefits of cloud without compromising on functionality, and while having the right level of control and flexibility. This is the true promise of cloud.

    Oracle's cloud strategy centers around delivering the benefits of cloud without compromise. We uniquely empower our customers with complete solutions and choice: from the richest functionality to integrated reporting and a great user experience, it's all available in the cloud. And it works not just with other Oracle cloud applications, but with your existing Oracle and third-party systems as well. This helps protect your current investments and extend their value as you journey to the cloud. We've made the necessary investments not only in our applications but also in the underlying technology that makes it all run, from the platform down to the hardware and operating system. We make it all. And we've engineered it to work together and be highly optimized for our customers, in the cloud.

    With Oracle enterprise-grade cloud applications, you get the benefits of cloud plus more power, more choice, and more confidence. Read more about how you can realize the true advantage of cloud with Oracle enterprise-grade cloud applications in the Oracle Executive Strategy Brief here. You can also attend an Oracle Cloud Conference event at a city near you. Register here.

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra.

    Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have confidence in it. Data should be a foundation upon which a business is built.

    In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues; it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still, and the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets.

    You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data (those people who could potentially log in with escalated rights and see more than they should be allowed to, and who need to be trusted to respond if there’s a problem) are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have.

    In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example – backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who doesn’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close.

    @rob_farley
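    P.S. To make the encrypted-backup idea concrete, here is a minimal T-SQL sketch using the syntax introduced in SQL Server 2014 (the database, credential, certificate and URL names are placeholders, not from the original post):

        -- Back the database up to cloud storage; only holders of the certificate can restore it
        BACKUP DATABASE SalesDb
        TO URL = 'https://myaccount.blob.core.windows.net/backups/SalesDb.bak'
        WITH CREDENTIAL = 'MyAzureCredential',
             ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);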

    Read the article

  • Optimal Data Structure for our own API

    - by vermiculus
    I'm in the early stages of writing an Emacs major mode for the Stack Exchange network; if you use Emacs regularly, this will benefit you in the end. In order to minimize the number of calls made to Stack Exchange's API (capped at 10000 per IP per day) and to just be a generally responsible citizen, I want to cache the information I receive from the network and store it in memory, waiting to be accessed again. I'm really stuck as to what data structure to store this information in. Obviously, it is going to be a list. However, as with any data structure, the choice must be determined by what data is being stored and how it will be accessed. I would like to be able to store all of this information in a single symbol such as stack-api/cache. So, without further ado, stack-api/cache is a list of conses keyed by last update:

        `(<csite> <csite> <csite>)

    where <csite> would be

        (1362501715 . <site>)

    At this point, all we've done is define a simple association list. Of course, we must go deeper. Each <site> is a list of the API parameter (unique) followed by a list of questions:

        `("codereview" <cquestion> <cquestion> <cquestion>)

    Each <cquestion> is, you guessed it, a cons of a question with its last update time:

        (1362501715 . <question>)

    <question> is a cons of a question structure and a list of answers (again, consed with their last update time):

        (<question-structure> <canswer> <canswer> <canswer>)

    and <canswer> is

        (1362501715 . <answer-structure>)

    This data structure is likely most accurately described as a tree, but I don't know if there's a better way to do this considering the language, Emacs Lisp (which isn't all that different from the Lisp you know and love). The explicit conses are likely unnecessary, but they help my brain wrap around it better. I'm pretty sure a <csite>, for example, would just turn into

        (<epoch-time> <api-param> <cquestion> <cquestion> ...)

    Concerns: Does storing data in a potentially huge structure like this have any performance trade-offs for the system? I would like to avoid storing extraneous data, but I've done what I could, and I don't think the dataset is that large in the first place (for normal use) since it's all just human-readable text in reasonable proportion. (I'm planning on culling old data using the times at the head of the list; each node inherits its last-update time from its children and so on down the tree. To what extent this cull should take place, I'm not sure.) Does storing data like this have any performance trade-offs for that which must use it? That is, will set and retrieve operations suffer from the size of the list? Do you have any other suggestions as to what a better structure might look like?
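    For concreteness, here is a minimal Emacs Lisp sketch of the flattened <csite> shape described above, with one accessor (the names and the lookup strategy are only illustrative, not part of any existing package):

        (require 'cl-lib)

        (defvar stack-api/cache nil
          "List of cached sites, each of the form (LAST-UPDATE API-PARAM <cquestion> ...).")

        (defun stack-api/find-site (api-param)
          "Return the cached site entry whose API parameter is API-PARAM, or nil."
          (cl-find api-param stack-api/cache :key #'cadr :test #'string=))

        ;; Example: cache a site, then look it up again.
        (push (list 1362501715 "codereview") stack-api/cache)
        (stack-api/find-site "codereview")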

    Read the article

  • Getting 2D Platformer entity collision Response Correct (side-to-side + jumping/landing on heads)

    - by jbrennan
    I've been working on a 2D (tile-based) platformer for iOS and I've got basic entity collision detection working, but there's just something not right about it and I can't quite figure out how to solve it. There are two forms of collision between player entities as far as I can tell: either the two players (human controlled) are hitting each other side-to-side (i.e. pushing against one another), or one player has jumped on the head of the other player. (Naturally, if I wanted to expand this to player vs. enemy, the effects would be different, but the types of collisions would be identical; just the reaction should be a little different.)

    In my code I believe I've got the side-to-side case working: if two entities press against one another, then they are both moved back on either side of the intersection rectangle so that they are just pushing on each other. I also have the "landed on the other player's head" part working. The real problem is: if the two players are currently pushing up against each other and one player jumps, then at some point during the jump the height-difference threshold that counts as a "land on head" is passed, and it then registers as a head-landing. As a life-long player of 2D Mario Bros style games, this feels incorrect to me, but I can't quite figure out how to solve it.

    My code (it's really Objective-C, but I've put it in pseudo C-style code just to be simpler for non-ObjC readers):

        void checkCollisions() {
            // For each entity in the scene, compare it with all other entities
            // (but not with one it's already been compared against)
            for (int i = 0; i < _allGameObjects.count(); i++) {
                // GameObject is an Entity
                GEGameObject *firstGameObject = _allGameObjects.objectAtIndex(i);
                // Don't check against yourself or any previous entity
                for (int j = i+1; j < _allGameObjects.count(); j++) {
                    GEGameObject *secondGameObject = _allGameObjects.objectAtIndex(j);
                    // Get the collision bounds for both entities, then see if they intersect
                    // CGRect is a C struct with an origin Point (x, y) and a Size (w, h)
                    CGRect firstRect = firstGameObject.collisionBounds();
                    CGRect secondRect = secondGameObject.collisionBounds();
                    // Collision of any sort
                    if (CGRectIntersectsRect(firstRect, secondRect)) {
                        ////////////////////////////////
                        //
                        // Check for jumping first (???)
                        //
                        ////////////////////////////////
                        if (firstRect.origin.y > (secondRect.origin.y + (secondRect.size.height * 0.7))) {
                            // the top entity could be pretty far down/in to the bottom entity....
                            firstGameObject.didLandOnEntity(secondGameObject);
                        } else if (secondRect.origin.y > (firstRect.origin.y + (firstRect.size.height * 0.7))) {
                            // second entity was actually on top....
                            secondGameObject.didLandOnEntity(firstGameObject);
                        } else if (firstRect.origin.x > secondRect.origin.x && firstRect.origin.x < (secondRect.origin.x + secondRect.size.width)) {
                            // Hit from the RIGHT
                            CGRect intersection = CGRectIntersection(firstRect, secondRect);
                            // The NUDGE just offsets either object back to the left or right.
                            // After the nudging, they are exactly pressing against each other with no intersection.
                            firstGameObject.nudgeToRightOfIntersection(intersection);
                            secondGameObject.nudgeToLeftOfIntersection(intersection);
                        } else if ((firstRect.origin.x + firstRect.size.width) > secondRect.origin.x) {
                            // hit from the LEFT
                            CGRect intersection = CGRectIntersection(firstRect, secondRect);
                            secondGameObject.nudgeToRightOfIntersection(intersection);
                            firstGameObject.nudgeToLeftOfIntersection(intersection);
                        }
                    }
                }
            }
        }

    I think my collision detection code is pretty close, but obviously I'm doing something a little wrong. I really think it's to do with the way my jumps are checked (I wanted to make sure that a jump could be detected from an angle, instead of only when the falling player is at a right angle to the player below). Can someone please help me here? I haven't been able to find many resources on how to do this properly (and thinking like a game developer is new for me). Thanks in advance!

    Read the article

  • How to parse amadeus air ticket file

    - by Andrus
    Amadeus produces an AIR file like the one below for every flight reservation. I need to read the reservation number and the source and destination airports from this file. I searched Google for "amadeus air format" but haven't found a format description. The Wikipedia entry about EDIFACT is a bit different; it does not describe this content. Where can I find information about the file structure? How do I parse this file? I have no idea about the file structure: does it contain records like an SQL table, or is it some set of reservation protocol instructions like a PostScript file? The application should work in Microsoft Windows, preferably in Visual FoxPro or C#. FoxPro or Microsoft Visual Studio 2012 Express can be used as the programming environment.

    Google returns only Amadeus user guides and tutorials like the one in the comment and at http://www.amadeusschweiz.com/en/documentation/usermanuals.html — those are user manuals. Most promising looks the Amadeus Air user guide from that list. The file I received was named air.txt, and the first token in the file is AIR-BLK206. Maybe BLK206 is some booking format descriptor. Google returns some documents like mine using this, so it looks like it is commonly used. This file probably describes how to reserve a ticket, which produces the air.txt file. I searched this guide and the ticket user guide for BLK, but neither contains this abbreviation, and the commands in the user manual look different from those in this file. How can I use this information to extract the reservation number and destination airport from this file? I haven't found a format description using Google. There are Amadeus user guides, tutorials and quick reference files similar to those posted, but I don't understand how to use them to parse this file. One message describes this as a form of EDIFACT; however, the EDIFACT message sample in Wikipedia is also different. I need to create a quick prototype to show the customer that we can read those files. Maybe there are some programs which can be used to display it in human-readable form?

    Read the article

  • Do SEO-friendly URLs really affect a page's ranking?

    - by Lee Harold
    SEO-friendly URLs are all the rage these days. But do they actually have a meaningful impact on a page's ranking in Google and other search engines? If so, why? If not, why not? (Note that I would absolutely agree that SEO-friendly URLs are nicer to use for human beings. My question is whether they actually make a difference to the ranking algorithms.) Update: As it turns out, the Google post that endorphine points to here has caused tremendous confusion in the SEO community. For a sampling of the discussion, see here, here, and here. Part of the problem is that the Google post is addressing the worst case where URL rewriting is done poorly and so you'd be better off sticking with a dynamic URL rather than a mangled static "SEO-friendly" URL. There's no question dynamic URLs can be crawled by Google and can achieve high rankings. Maybe it would be easier to reframe the question more concretely: given 2 otherwise equivalent pages, which will rank higher for the search "do seo friendly urls really affect page ranking"? A) http://stackoverflow.com/questions/505793/do-seo-friendly-urls-really-affect-a-pages-ranking or B) http://stackoverflow.com?question=505793 (a fake URL for comparison only)

    Read the article

  • How do I make Master/Detail subreports in ReportBuilder come out right?

    - by Mason Wheeler
    I've got a report in ReportBuilder that's supposed to report on two objects. I didn't create this report, and I can't ask the person who did about how it works. Before running the report, we have some code that goes through, finds all the properties on the objects, and loads them into a memory dataset that looks like this:

        OBJECT_ID:  TStringField
        PROP_NAME:  TStringField
        PROP_VALUE: TStringField

    The report engine then creates a line on the report for each property in this dataset. This is implemented in a sub-report, whose parent only contains an OBJECT_ID, which is a human-readable name. Everything was going great until we had to display a "comment" of arbitrary size in the report. I made a second sub-report with a TMemoField so it could hold the text, and set the report up in the report designer. What I expect when I run the report is something that looks like this:

        HEADER
        Object 1 properties
        Object 1 comment
        Object 2 properties
        Object 2 comment

    I've managed to get just about everything but that. I used the MasterDataPipeline and MasterFieldLinks properties of the sub-reports' pipelines to try to link the OBJECT_IDs of the sub-reports to the OBJECT_ID of the header, and that's the closest I've managed to come, but now what I see is:

        HEADER
        Object 1 properties
        Object 1 comment
        Object 2 comment

    The "Object 2 properties" section is nowhere to be seen, even though I've manually verified that the data is making it into the dataset correctly. This is driving me nuts. Any ReportBuilder gurus out there know what's going on and how to fix it?

    Read the article

  • Issue on file existence in C

    - by darkie15
    Hi All, here is my code which checks if a file exists:

        #include <stdio.h>
        #include <zlib.h>
        #include <unistd.h>
        #include <string.h>

        int main(int argc, char *argv[])
        {
            char *path = NULL;
            FILE *file = NULL;
            char *fileSeparator = "/";
            size_t size = 100;
            int index;

            printf("\nArgument count is = %d", argc);
            if (argc <= 1) {
                printf("\nUsage: ./output filename1 filename2 ...");
                printf("\n The program will display human readable information about the PNG file provided");
            } else if (argc > 1) {
                for (index = 1; index < argc; index++) {
                    path = getcwd(path, size);
                    strcat(path, fileSeparator);
                    printf("\n File name entered is = %s", argv[index]);
                    strcat(path, argv[index]);
                    printf("\n The complete path of the file name is = %s", path);
                    if (access(path, F_OK) != -1) {
                        printf("File does exist");
                    } else {
                        printf("File does not exist");
                    }
                    path = NULL;
                }
            }
            return 0;
        }

    On running the command ./output test.txt test2.txt, the output is:

        $ ./output test.txt test2.txt
        Argument count is = 3
        File name entered is = test.txt
        The complete path of the file name is = /home/welcomeuser/test.txt
        File does not exist
        File name entered is = test2.txt
        The complete path of the file name is = /home/welcomeuser/test2.txt
        File does not exist

    Now test.txt does exist on the system:

        $ ls
        assignment.c  output.exe  output.exe.stackdump  test.txt

    and yet test.txt is shown as a file not existing. Please help me understand the issue here. Also, please feel free to post any suggestions to improve the code/avoid a bug. Regards, darkie

    Read the article

  • solr JOIN query

    - by Sfairas
    I need to run a JOIN query on a solr index. I've got two XMLs that I have indexed, person.xml and subject.xml.

    Person:

        <doc>
          <field name="id">P39126</field>
          <field name="family">Smith</field>
          <field name="given">John</field>
          <field name="subject">S1276</field>
          <field name="subject">S1312</field>
        </doc>

    Subject:

        <doc>
          <field name="id">S1276</field>
          <field name="topic">Abnormalities, Human</field>
        </doc>

    I need to only display information from the person doc, but each query should match fields in both person and subject. In the case where the query matches only the subject doc, I need to display all person docs that have a matching id. Is this possible to do without running two separate queries? Something like a JOIN query would do the job. Any help?
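    For reference, Solr 4.0 and later ship a join query parser that can express this in a single request. A sketch against the documents above (field names taken from the post):

        # Match subject docs on topic, then return the person docs whose
        # "subject" field contains the id of any matching subject doc:
        q={!join from=id to=subject}topic:"Abnormalities, Human"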

    Read the article

  • newline-ignoring diff / diff across multiple lines / reflow-ignoring diff

    - by Adam
    Does anybody know of a diff-like tool that can show me the changes between two text files, but ignore changes in whitespace including newlines? Here's an example: the quick brown fox jumped over the lazy bear. the quick brown fox jumped over the lazy bear. the quick brown fox jumped over the lazy bear. the quick brown fox jumped over the lazy bear. quick brown fox jumped over the lazy bear. the quick brown fox jumped over the lazy bear. the quick brown fox jumped over the lazy bear. the quick brown fox jumped over the lazy bear. All I did was delete one word and reflow it, but "diff -b" detects a change on every line (as it should; I'm not saying this is a bug in diff). But for large LaTeX files this is a major problem; change one word in a long paragraph and the diff you get back is basically useless. By the way, I'm aware that this requires way more computational power than the usual lines-are-atomic diff. I'm only doing this on small human-generated files and am happy to wait a long time if I have to.

    Read the article

  • How to translate CCSID 65535 in SQuirrel from a DB2 on an iseries

    - by ZS6JCE
    I am new to SQuirreL SQL. I need some help translating CCSID 65535 data into ASCII or Unicode (or anything human-readable). I am using the JDBC driver per the following guide. According to IBM's website:

        What character conversion issues must my program deal with? The IBM i database uses EBCDIC to store text. Java uses Unicode. The JDBC driver handles all conversion between character sets, so your program should not have to worry about it.

    but I think they refer to CCSID 37 and not 65535. I have got the following info from my DB2 database. Doing DSPFD gives me:

        Coded character set identifier . . . . . . : CCSID 65535

    Doing DSPFFD gives me:

        TXT CHAR 3 3 41 Both
        Field text . . . . . . . . . . . . . . . : Text
        Coded Character Set Identifier . . . . . : 65535

    But the SQuirreL query result for the TXT field is:

        5c c1 c4 c4 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 c1 40 7e 40 c2 40 4e 40 c3 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40

    which should be translated to something like:

        *ADD A = B + C
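    For reference, CCSID 65535 means "binary/no conversion," which is why the driver hands back raw bytes instead of text. As an illustration of the manual translation, here is a minimal Java sketch that decodes EBCDIC bytes with code page Cp037, assuming (as the expected output suggests) that the data is really CCSID 37 text; the class name is mine:

        import java.nio.charset.Charset;

        public class EbcdicDemo {
            public static void main(String[] args) {
                // A few bytes from the dump above: 5c c1 c4 c4 40 ... c1 40 7e 40 c2 40 4e 40 c3
                byte[] raw = { 0x5c, (byte) 0xc1, (byte) 0xc4, (byte) 0xc4, 0x40,
                               (byte) 0xc1, 0x40, 0x7e, 0x40, (byte) 0xc2, 0x40,
                               0x4e, 0x40, (byte) 0xc3 };
                // Cp037 is the JVM name for EBCDIC CCSID 37
                System.out.println(new String(raw, Charset.forName("Cp037")));
                // prints: *ADD A = B + C
            }
        }

    The IBM Toolbox for Java (JT400) JDBC driver also has a "translate binary" connection property that, when set to true, forces this kind of conversion on CCSID 65535 fields.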

    Read the article

  • How to access/use custom attribute in spring security based CAS client

    - by Bill Li
    I need to send certain attributes (say, a human-readable user name) from the server to the client after a successful authentication. The server part is done, and the attribute is now sent to the client. From the log, I can see:

        2010-03-28 23:48:56,669 DEBUG Cas20ServiceTicketValidator:185 - Server response: [email protected]
        <cas:proxyGrantingTicket>PGTIOU-1-QZgcN61oAZcunsC9aKxj-cas</cas:proxyGrantingTicket>
        <cas:attributes>
          <cas:FullName>Test account 1</cas:FullName>
        </cas:attributes>
        </cas:authenticationSuccess>
        </cas:serviceResponse>

    Yet I don't know how to access the attribute in the client (I am using Spring Security 2.0.5). In the authenticationProvider, a userDetailsService is configured to read the DB for the authenticated principal:

        <bean id="casAuthenticationProvider" class="org.springframework.security.providers.cas.CasAuthenticationProvider">
          <sec:custom-authentication-provider />
          <property name="userDetailsService" ref="clerkManager"/>
          <!-- other stuff goes here -->
        </bean>

    Now in my controller, I can easily do this:

        Clerk currentClerk = (Clerk)SecurityContextHolder.getContext().getAuthentication().getPrincipal();

    Ideally, I could fill the attribute into this Clerk object as another property in some way. How do I do this? Or what is the recommended approach to share attributes across all apps, given CAS's centralized nature?

    Read the article

  • iphone sdk - UITableView - cannot assign a table to the table view

    - by kossibox
    Hello, this is a part of my code. My application crashes when I try to load the view that includes the UITableView. I think there's a problem with the table I'm trying to use, but I can't find it. Help, please.

        gameTimingTable = [NSArray arrayWithObjects:@"2min + 10sec/coup", @"1min + 15sec/coup", @"5min", nil];

    declared in the .h as

        NSArray *gameTimingTable;

    This is the code I'm using to assign the table to the UITableView:

        - (void)viewDidLoad {
            gameTimingTable = [NSArray arrayWithObjects:@"2min + 10sec/coup", @"1min + 15sec/coup", @"5min", nil];
        }

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
            // There is only one section.
            return 1;
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            // Return the number of time zone names.
            return [gameTimingTable count];
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *MyIdentifier = @"MyIdentifier";
            // Try to retrieve from the table view a now-unused cell with the given identifier.
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:MyIdentifier];
            // If no cell is available, create a new one using the given identifier.
            if (cell == nil) {
                // Use the default cell style.
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:MyIdentifier] autorelease];
            }
            // Set up the cell.
            NSString *cadence = [gameTimingTable objectAtIndex:indexPath.row];
            cell.textLabel.text = cadence;
            return cell;
        }

        /* To conform to the Human Interface Guidelines, since selecting a row would have no
           effect (such as navigation), make sure that rows cannot be selected. */
        - (NSIndexPath *)tableView:(UITableView *)tableView willSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            return nil;
        }

    Thanks a lot.

    Read the article

  • Is Berkeley DB XML a viable database backend?

    - by w00t
    Apparently, BDB-XML has been around since at least 2003 but I only recently stumbled upon it on Oracle's website: Berkeley DB XML. Here's the blurb: Oracle Berkeley DB XML is an open source, embeddable XML database with XQuery-based access to documents stored in containers and indexed based on their content. Oracle Berkeley DB XML is built on top of Oracle Berkeley DB and inherits its rich features and attributes. Like Oracle Berkeley DB, it runs in process with the application with no need for human administration. Oracle Berkeley DB XML adds a document parser, XML indexer and XQuery engine on top of Oracle Berkeley DB to enable the fastest, most efficient retrieval of data. To me it seems that the underlying ideas are technically sound and probably more mature than the newer document-based DBs like CouchDB or MongoDB. It has support for C, C++, Ruby and Perl, as far as I can determine. It even has HA-capabilities like automatic replication using a master/slave model with automatic election. However, I can't seem to find any projects that use it. Is there something fundamentally wrong with it? Is the license too onerous? Is it too complicated? Why is it not being used?

    Read the article

  • Would OpenID or OAuth work for authorization/authentication on a distributed web service?

    - by David Eyk
    We're in the early stages of designing a RESTful/resource-oriented web service API for a computational linguistics application. Because many of the resources we plan to serve are rights-encumbered, a key design decision has been to specify the platform so that each resource provider can expose their own web service that complies with the API spec. This way, the rights owner maintains control over their content (and thus the ability to throttle or deny access at will) and a direct relationship with the consumer, while still being able to participate in the collaborative network. At the same time, to simplify the job of writing a client for this service, we want to allow a client access to the distributed service through one end-point, with the server handling content negotiation and retrieval from the appropriate providers. Right now, we're at an impasse on authentication/authorization schemes. One of our number has argued for the (technical) simplicity of a central authentication registry, but others are concerned about the organizational complexity of such a scheme. It seems to me, based on an albeit limited understanding of the technologies, that a combination of OpenID and OAuth would do the trick, with a client authenticating with the end-point via OpenID, and the server taking action on the user's behalf with the various content providers using OAuth. I've only ever seen implementations (e.g. stackoverflow, twitter, etc.) where a human was present to intervene, and I still need to do more research on these technologies. Would a scheme like this work for an automated web service, or would it make the client too difficult to implement and operate?

    Read the article

  • Where can I find free and open data?

    - by kitsune
    Sooner or later, coders will feel the need to have access to "open data" in one of their projects, from knowing a city's zip code to more obscure information such as the axial tilt of Pluto. I know data.un.org, which offers access to the UN's extensive array of databases that deal with human development and other socio-economic issues. The other usual suspects are NASA and the USGS for planetary data. There's an article at readwriteweb with more links; infochimps.org seems to stand out. Personally, I need to find historic commodity prices, stock values and other financial data. All these data sets seem to cost money, however.

    Clarification
    To clarify, I'm interested in all kinds of open data, because sooner or later I know I will be in a situation where I could need it. I will try to edit this answer and include the suggestions in a structured manner. A link for financial data was hidden in that readwriteweb article, doh! It's called opentick.com. Looks good so far!

    Update
    I stumbled over semantic data in another question of mine on here. There is opencyc ('the world's largest and most complete general knowledge base and commonsense reasoning engine'). A project called UMBEL provides a light-weight, distilled version of opencyc. Umbel has semantic data in rdf/owl/skos n3 syntax. The World Bank also released a very nice API. It offers data from the last 50 years for about 200 countries.
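    As a taste of that last one, the World Bank API serves indicator series over plain HTTP with optional JSON output. A request shaped like the following (the indicator code and date range are just an illustrative example) returns total population for all countries:

        http://api.worldbank.org/countries/all/indicators/SP.POP.TOTL?date=2000:2010&format=json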

    Read the article

  • JSON and Microformats

    - by Tauren
    I'm looking for opinions on whether microformats should be used to name JSON elements. For instance, there is a microformat for physical addresses, that looks like this: <div class="adr"> <div class="street-address">665 3rd St.</div> <div class="extended-address">Suite 207</div> <span class="locality">San Francisco</span>, <span class="region">CA</span> <span class="postal-code">94107</span> <div class="country-name">U.S.A.</div> </div> There is a document available on using JSON and Microformats. The information above could be represented as JSON data like this: "adr": { "street-address":"665 3rd St.", "extended-address":"Suite 207", "locality":"San Fransicso", "region":"CA", "postal-code":"94107", "country-name":"U.S.A." }, The issue I have with this is that I'd like my JSON data to be as lightweight as possible, but still human readable. While still supporting international addresses, I would prefer something like this: "address": { "street":"665 3rd St.", "extended":"Suite 207", "locality":"San Fransicso", "region":"CA", "code":"94107", "country":"U.S.A." }, If I'm designing a new JSON API right now, does it make sense to use microformats from the start? Or should I not really worry about it? Is there some other standard that is more specific to JSON that I should look at?

    Read the article

  • Scaling Literate Programming?

    - by Tetha
    Greetings. I have been looking at Literate Programming a bit now, and I do like the idea behind it: you basically write a little paper about your code and write down as much as possible of the design decisions, the code probably surrounding the module, the inner workings of the module, assumptions and conclusions resulting from the design decisions, potential extensions -- all this can be written down in a nice way using TeX. Granted, the first point: it is documentation. It must be kept up-to-date, but that should not be that bad, because your change should have a justification and you can write that down.

    However, how does Literate Programming scale to a larger degree? Overall, Literate Programming is still just text. Very human-readable text, of course, but still text, and thus it is hard to follow large systems. For example, I reworked large parts of my compiler to use some magic to chain compile steps together, because some "x.register_follower(y); y.register_follower(z); y.register_follower(a); ..." got really unwieldy, and changing that to "x y z a" made it a bit better, even though this is at its breaking point, too.

    So, how does Literate Programming scale to larger systems? Does anyone try to do that? My thought would be to use LP to specify components that communicate with each other using event streams and chain all of these together using a subset of graphviz. This would be a fairly natural extension to LP, as you can extract documentation -- a dataflow diagram -- from the net and also generate code from it really well. What do you think of it? -- Tetha.

    Read the article

  • Winform/Program and how to write class 1, class 2, class 3, class 4 in array to linklabels?!!?

    - by JB
    So my program works like this: using WinForms, the user enters an ID number; based on a matching ID number in an array, that student's information and class schedule are output in a message box. My question is: how do I take the four classes in the message box/array and write them to the LinkLabel text in form 2?

    My GetSchedule class contains the array and is listed below:

        namespace Eagle_Eye_Class_Finder
        {
            public class GetSchedule
            {
                IDnumber[] IDnumbers = new IDnumber[3];

                public string GetDataFromNumber(string ID)
                {
                    foreach (IDnumber IDCandidateMatch in IDnumbers)
                    {
                        if (IDCandidateMatch.ID == ID)
                        {
                            StringBuilder myData = new StringBuilder();
                            myData.AppendLine(IDCandidateMatch.Name);
                            myData.AppendLine(": ");
                            myData.AppendLine(IDCandidateMatch.ID);
                            myData.AppendLine(IDCandidateMatch.year);
                            myData.AppendLine(IDCandidateMatch.class1);
                            myData.AppendLine(IDCandidateMatch.class2);
                            myData.AppendLine(IDCandidateMatch.class3);
                            myData.AppendLine(IDCandidateMatch.class4);
                            //return myData;
                            return myData.ToString();
                        }
                    }
                    return "";
                }

                public GetSchedule()
                {
                    IDnumbers[0] = new IDnumber() { Name = "Joshua Banks", ID = "900456317", year = "Senior", class1 = "TEET 4090", class2 = "TEET 3020", class3 = "TEET 3090", class4 = "TEET 4290" };
                    IDnumbers[1] = new IDnumber() { Name = "Sean Ward", ID = "900456318", year = "Junior", class1 = "ENGNR 4090", class2 = "ENGNR 3020", class3 = "ENGNR 3090", class4 = "ENGNR 4290" };
                    IDnumbers[2] = new IDnumber() { Name = "Terrell Johnson", ID = "900456319", year = "Sophomore", class1 = "BUS 4090", class2 = "BUS 3020", class3 = "BUS 3090", class4 = "BUS 4290" };
                }

                public class IDnumber
                {
                    public string Name { get; set; }
                    public string ID { get; set; }
                    public string year { get; set; }
                    public string class1 { get; set; }
                    public string class2 { get; set; }
                    public string class3 { get; set; }
                    public string class4 { get; set; }

                    public static void ProcessNumber(IDnumber myNum)
                    {
                        StringBuilder myData = new StringBuilder();
                        myData.AppendLine(myNum.Name);
                        myData.AppendLine(": ");
                        myData.AppendLine(myNum.ID);
                        myData.AppendLine(myNum.year);
                        myData.AppendLine(myNum.class1);
                        myData.AppendLine(myNum.class2);
                        myData.AppendLine(myNum.class3);
                        myData.AppendLine(myNum.class4);
                        MessageBox.Show(myData.ToString());
                    }
                }
            }
        }

    My form 2, which will contain the LinkLabels, is listed below:

        public class YOURCLASSSCHEDULE : System.Windows.Forms.Form
        {
            public System.Windows.Forms.LinkLabel linkLabel1;
            public System.Windows.Forms.LinkLabel linkLabel2;
            public System.Windows.Forms.LinkLabel linkLabel3;
            public System.Windows.Forms.LinkLabel linkLabel4;
            private Button button1;

            /// Required designer variable.
            public System.ComponentModel.Container components = null;

            public YOURCLASSSCHEDULE()
            {
                InitializeComponent();
                // TODO: Add any constructor code after InitializeComponent call
            }

            /// Clean up any resources being used.
            protected override void Dispose(bool disposing)
            {
                if (disposing)
                {
                    if (components != null)
                    {
                        components.Dispose();
                    }
                }
                base.Dispose(disposing);
            }

            #region Windows Form Designer generated code
            /// <summary>
            /// Required method for Designer support - do not modify
            /// the contents of this method with the code editor.
            /// </summary>
            private void InitializeComponent()
            {
                System.ComponentModel.ComponentResourceManager resources = new System.ComponentModel.ComponentResourceManager(typeof(YOURCLASSSCHEDULE));
                this.linkLabel1 = new System.Windows.Forms.LinkLabel();
                this.linkLabel2 = new System.Windows.Forms.LinkLabel();
                this.linkLabel3 = new System.Windows.Forms.LinkLabel();
                this.linkLabel4 = new System.Windows.Forms.LinkLabel();
                this.button1 = new System.Windows.Forms.Button();
                this.SuspendLayout();
                //
                // linkLabel1
                //
                this.linkLabel1.BackColor = System.Drawing.SystemColors.ActiveCaption;
                this.linkLabel1.BorderStyle = System.Windows.Forms.BorderStyle.Fixed3D;
                this.linkLabel1.Font = new System.Drawing.Font("Times New Roman", 14.25F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, ((byte)(0)));
                this.linkLabel1.LinkArea = new System.Windows.Forms.LinkArea(0, 7);
                this.linkLabel1.LinkBehavior = System.Windows.Forms.LinkBehavior.HoverUnderline;
                this.linkLabel1.Location = new System.Drawing.Point(41, 123);
                this.linkLabel1.Name = "linkLabel1";
                this.linkLabel1.Size = new System.Drawing.Size(288, 32);
                this.linkLabel1.TabIndex = 1;
                this.linkLabel1.TabStop = true;
                this.linkLabel1.Text = "Class 1";
                this.linkLabel1.TextAlign = System.Drawing.ContentAlignment.MiddleCenter;
                this.linkLabel1.LinkClicked += new System.Windows.Forms.LinkLabelLinkClickedEventHandler(this.linkLabel1_LinkClicked);
                //
                // linkLabel2
                //
                this.linkLabel2.BackColor = System.Drawing.SystemColors.ActiveCaption;
                this.linkLabel2.BorderStyle = System.Windows.Forms.BorderStyle.Fixed3D;
                this.linkLabel2.Font = new System.Drawing.Font("Times New Roman", 14.25F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, ((byte)(0)));
                this.linkLabel2.LinkBehavior = System.Windows.Forms.LinkBehavior.HoverUnderline;
                this.linkLabel2.Location = new System.Drawing.Point(467, 123);
                this.linkLabel2.Name = "linkLabel2";
                this.linkLabel2.Size = new System.Drawing.Size(288, 32);
                this.linkLabel2.TabIndex = 2;
                this.linkLabel2.TabStop = true;
                this.linkLabel2.Text = "Class 2";
                this.linkLabel2.TextAlign = System.Drawing.ContentAlignment.MiddleCenter;
                this.linkLabel2.VisitedLinkColor = System.Drawing.Color.Navy;
                this.linkLabel2.LinkClicked += new System.Windows.Forms.LinkLabelLinkClickedEventHandler(this.linkLabel2_LinkClicked);
                //
                // linkLabel3
                //
                this.linkLabel3.BackColor = System.Drawing.SystemColors.ActiveCaption;
                this.linkLabel3.BorderStyle = System.Windows.Forms.BorderStyle.Fixed3D;
                this.linkLabel3.Font = new System.Drawing.Font("Times New Roman", 14.25F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, ((byte)(0)));
                this.linkLabel3.LinkBehavior = System.Windows.Forms.LinkBehavior.HoverUnderline;
                this.linkLabel3.Location = new System.Drawing.Point(41, 311);
                this.linkLabel3.Name = "linkLabel3";
                this.linkLabel3.Size = new System.Drawing.Size(288, 32);
                this.linkLabel3.TabIndex = 3;
                this.linkLabel3.TabStop = true;
                this.linkLabel3.Text = "Class 3";
                this.linkLabel3.TextAlign = System.Drawing.ContentAlignment.MiddleCenter;
                this.linkLabel3.LinkClicked += new System.Windows.Forms.LinkLabelLinkClickedEventHandler(this.linkLabel3_LinkClicked);
                //
                // linkLabel4
                //
                this.linkLabel4.BackColor = System.Drawing.SystemColors.ActiveCaption;
                this.linkLabel4.BorderStyle = System.Windows.Forms.BorderStyle.Fixed3D;
                this.linkLabel4.Font = new System.Drawing.Font("Times New Roman", 14.25F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, ((byte)(0)));
                this.linkLabel4.LinkBehavior = System.Windows.Forms.LinkBehavior.HoverUnderline;
                this.linkLabel4.Location = new System.Drawing.Point(467, 311);
                this.linkLabel4.Name = "linkLabel4";
                this.linkLabel4.Size = new System.Drawing.Size(288, 32);
                this.linkLabel4.TabIndex = 4;
                this.linkLabel4.TabStop = true;
                this.linkLabel4.Text = "Class 4";
                this.linkLabel4.TextAlign = System.Drawing.ContentAlignment.MiddleCenter;
                this.linkLabel4.LinkClicked += new System.Windows.Forms.LinkLabelLinkClickedEventHandler(this.linkLabel4_LinkClicked);
                //
                // YOURCLASSSCHEDULE
                //
                this.AutoScaleBaseSize = new System.Drawing.Size(6, 15);
                this.BackgroundImage = ((System.Drawing.Image)(resources.GetObject("$this.BackgroundImage")));
                this.BackgroundImageLayout = System.Windows.Forms.ImageLayout.Stretch;
                this.ClientSize = new System.Drawing.Size(790, 482);
                this.Controls.Add(this.button1);
                this.Controls.Add(this.linkLabel4);
                this.Controls.Add(this.linkLabel3);
                this.Controls.Add(this.linkLabel2);
                this.Controls.Add(this.linkLabel1);
                this.Font = new System.Drawing.Font("OldDreadfulNo7 BT", 8.25F, System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point, ((byte)(0)));
                this.Name = "YOURCLASSSCHEDULE";
                this.Text = "Your Classes";
                this.Load += new System.EventHandler(this.Form2_Load);
                this.ResumeLayout(false);
            }
            #endregion

            public void Form2_Load(object sender, System.EventArgs e)
            {
                // if (text == "900456317")
                // {
                // }
            }

            public void linkLabel1_LinkClicked(object sender, System.Windows.Forms.LinkLabelLinkClickedEventArgs e)
            {
                System.Diagnostics.Process.Start("http://www.georgiasouthern.edu/map/");
            }

            private void linkLabel2_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)
            {
            }

            private void linkLabel3_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)
            {
            }

            private void linkLabel4_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)
            {
            }

            private void button1_Click(object sender, EventArgs e)
            {
                Form1 form1 = new Form1();
                form1.Show();
                this.Hide();
            }
        }

    Read the article

  • Video-codec rater by image comparison algorithm?

    - by Andreas Hornig
    Hi, perhaps anyone knows if this is possible. Comparing image quality is almost impossible to describe without subjective influences. When someone rates an image's quality as good, there is at least one person who doesn't think so; human preferences are always different. So, I would like to know if there is a way to "rate" image quality with an algorithm that compares the original image to the produced one on the following issues:

    - colour change (difference pixel by pixel)
    - blur rate
    - artifacts and macroblocking

    The first one would be the easiest, because you could just check the difference in colours and give three values in +/- of each hex value. For the last two I don't know if this is possible, but the blocking could perhaps be detected by edge-finding. And the king's quest would be to do that for more than just one image, because video is done with several frames. Perhaps you expert programmers could tell me if such an automated algorithm can be done, to bring some objective measurement device into the rating of image quality. This could perhaps calm down some of the "h.264 is better than x264 and better than VP8 and blaaah" people :)

    Andreas

    1st posted here: http://www.hdtvtotal.com/index.php?name=PNphpBB2&file=viewtopic&p=9705
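    For what it's worth, the standard starting point for the pixel-by-pixel "colour change" rating is mean squared error and PSNR. Here is a minimal Python sketch (an illustration, not from the post; it assumes 8-bit frames loaded as same-shaped numpy arrays):

        import numpy as np

        def psnr(original: np.ndarray, encoded: np.ndarray) -> float:
            """Peak signal-to-noise ratio in dB; higher means closer to the original."""
            # Mean squared error across all pixels and colour channels
            mse = np.mean((original.astype(np.float64) - encoded.astype(np.float64)) ** 2)
            if mse == 0:
                return float("inf")  # identical images
            return 10.0 * np.log10(255.0 ** 2 / mse)

    Averaging the per-frame score over a clip gives a crude codec rating; perceptual metrics such as SSIM correlate better with human judgments, precisely because PSNR knows nothing about blur or blocking.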

    Read the article
