Search Results



  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo

    Talks summarized in this post:
    A Tale of One Software Bypass of MS Windows 8 Secure Boot
    Breaching SSL, One Byte at a Time
    Running at 99%: Surviving an Application DoS
    Security Response in the Age of Mass Customized Attacks
    x86 Rewriting: Defeating RoP and other Shinanighans
    Clowntown Express: interesting bugs and running a bug bounty program
    Active Fingerprinting of Encrypted VPNs
    Making Attacks Go Backwards
    Mask Your Checksums—The Gorry Details
    Adventures with weird machines thirty years after "Reflections on Trusting Trust"

    Introduction to Toorcon 15 (2013)

    Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and have blogged about previous conferences I attended, starting in 2003. As always, I've only summarized the talks I attended that interested me enough to write about them. Be aware that I may have misrepresented the speakers' remarks; they are not my remarks or opinions, nor those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.

    A Tale of One Software Bypass of MS Windows 8 Secure Boot

    Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak

    Yuri and Alex talked about UEFI, bootkits, and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference.

    MS Windows 8 Secure Boot Overview

    UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi); once replaced, it can modify the kernel, and replacing the bootloader is trivial. There are many legacy bootkits today, and UEFI replaces most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (as is used for phones), but you can't do that for PCs: PCs use writable memory with signatures. The DXE core verifies the UEFI boot loader(s), and the OS loader (winload.efi, winresume.efi) verifies the OS kernel. A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx): X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys and can't be accessed once the OS starts (the OS uses Run-Time (RT) services). The root cert uses RSA-2048 public keys and PKCS#7 format signatures. The relevant variables are: SecureBoot (enable/disable image signature checks), SetupMode (update keys, self-signed keys, and secure boot variables), and CustomMode (allows updating keys). Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation.

    Attacking MS Windows 8 Secure Boot

    Secure Boot does NOT protect from physical access; it can be disabled from the console. Each BIOS vendor implements Secure Boot differently, and there are several platform and BIOS vendors. It becomes a "zoo" of implementations, which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly.
    Vendor requirements: allow only signed UEFI firmware updates; protect UEFI firmware from direct modification in flash memory; protect FW update components; program the SPI controller securely; protect secure boot policy settings in NVRAM; protect the runtime API; and disable the Compatibility Support Module, which allows unsigned legacy boot.

    An attacker can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If the PK is not found, the firmware enters setup mode with secure boot turned off. A TPM can be exploited in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, but they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit: it loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates; protect UEFI firmware in ROM; protect the EFI variable store in ROM.

    Breaching SSL, One Byte at a Time

    Angelo Prado and Yoel Gluck, Salesforce.com

    CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME sends requests containing every possible character and measures the ciphertext length, looking for the plaintext that compresses the most; this recovers the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy, and Huffman coding replaces common byte sequences with shorter codes. US CERT thought the SSL compression problem was fixed, but it isn't; the speakers convinced CERT that it wasn't fixed, and CERT issued a CVE.

    BREACH, breachattrack.com

    BREACH exploits the SSL response body (Accept-Encoding, Content-Encoding). It takes advantage of the fact that the HTTP response body is still compressed (gzip) even where TLS-level compression is disabled. BREACH needs fairly "stable" pages that are static for ~30 seconds, and it needs attacker-supplied content (say, from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses; eventually, BREACH guesses a session's secret. Compression can be used to guess contents one byte at a time. For example, "Supersecret SupersecreX" (a wrong guess) compresses away 10 bytes, while "Supersecret Supersecret" (a correct guess) compresses away 11 bytes, so each character can be found by guessing every possible character. To start guessing, BREACH needs at least three known initial characters in the response sequence; compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include: lookahead (guess two or three characters at a time instead of one, which is expensive); rollback to the last known conflict; checking the compression ratio; and brute-forcing the first three "bootstrap" characters if needed (expensive). Block ciphers hide the exact plaintext length; the solution is to align the response in advance to the block size. Mitigations: for length, use variable padding; for secrets, use dynamic CSRF tokens per request, change the secret over time, or use a separate secret for input-less servlets. Future work: better understanding of DEFLATE/GZIP, and HTTPS extensions. A toy demonstration of the length side channel follows.
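    The core of both CRIME and BREACH is that a compressor produces shorter output when a guess matches a secret elsewhere in the stream. Below is a minimal sketch of that side channel (my illustration, not the speakers' code) using Python's zlib on a made-up page template; the secret, prefix, and template are invented, and a real attack measures TLS record lengths on the wire and uses many measurements to break ties.

        # Compression-length oracle behind CRIME/BREACH, reduced to its core.
        # SECRET, KNOWN_PREFIX, and the page template are invented for illustration.
        import string
        import zlib

        SECRET = "Supersecret"        # value the attacker wants to recover
        KNOWN_PREFIX = "Supersecre"   # the already-known "bootstrap" characters

        def response_length(attacker_input: str) -> int:
            # The server reflects attacker input into a compressed response
            # that also contains the secret.
            body = f"<html>{attacker_input} ... token={SECRET} ...</html>"
            return len(zlib.compress(body.encode()))

        # The candidate whose reflection compresses best is probably the next
        # byte of the secret: the right guess extends the LZ77 match by one,
        # while every wrong guess costs an extra literal byte.
        best = min(string.printable, key=lambda c: response_length(KNOWN_PREFIX + c))
        print("next byte is probably:", best)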
    Running at 99%: Surviving an Application DoS

    Ryan Huber, Risk I/O

    Ryan first discussed various ways to mount a denial of service (DoS) attack against web services. One usual method is to find a slow web page and fetch it repeatedly with wget, or to download large files. Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use Apache alternatives such as nginx.

    How to identify malicious hosts: short, sudden bursts of web requests; an obvious user-agent (curl, python); the same URL requested repeatedly; no referer header (not normal); hidden links (hide a link and see if a bot follows it); access from outside your geo IP range (unless the website is global); missing common headers in the request; unnaturally regular timing; IPs first seen at the beginning of the attack; and request counts per host (usually a very large number). A toy sketch of the counting and timing heuristics appears at the end of this summary. Use of a captcha can mitigate attacks, but you'll lose a lot of genuine users.

    Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer

    Bouncer is software written by Ryan that works from netflow data. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it is not nice about it: no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers so Bouncer gets clean timing data. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, a consumer library, multitasking, and logging of destroyed connections. Takeaways: DoS mitigation is easier with a complete picture; Bouncer is designed to make it easier to detect and defend against DoS, not to be a complete cure.
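    As a rough illustration of the request-counting and timing heuristics above (my sketch, not Bouncer's code; the thresholds are invented), per-IP bookkeeping in a sliding window is enough to flag both behaviors:

        # Toy sketch of two DoS heuristics: too many requests per host in a
        # window, and suspiciously regular inter-arrival timing. Thresholds
        # are invented; tune for real traffic.
        import time
        from collections import defaultdict, deque

        WINDOW_SECS = 10
        MAX_REQUESTS = 100     # per WINDOW_SECS, per source IP
        MAX_JITTER = 0.05      # seconds; human traffic is rarely this regular

        history = defaultdict(deque)   # ip -> deque of request timestamps

        def suspicious(ip: str, now: float = None) -> bool:
            now = time.time() if now is None else now
            q = history[ip]
            q.append(now)
            while q and now - q[0] > WINDOW_SECS:   # drop old timestamps
                q.popleft()
            too_many = len(q) > MAX_REQUESTS
            # Regular timing: inter-arrival gaps that barely vary look scripted.
            gaps = [b - a for a, b in zip(q, list(q)[1:])]
            too_regular = len(gaps) >= 5 and max(gaps) - min(gaps) < MAX_JITTER
            return too_many or too_regular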
    Security Response in the Age of Mass Customized Attacks

    Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/

    Peleus and Karthik talked about responding to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass-production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world; or consider designing your own PC from commodity parts. Exploit kits are another example of mass customization: the kits support multiple browsers and plugins and allow new modules. Exploit kits are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits.

    Characteristics of mass malware: potent, resilient, and relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, and obfuscation. Response time for 0-day exploits has gone down from ~40 days five years ago to about ~10 days now, so the drive with malware is towards mass-customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. From the malware they infer edicts to: support all versions of Reader; support all versions of Windows; support all versions of Flash; support all browsers; and write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example). Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs. However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is mass malware with mass zero-day attacks, resulting in mass customization of attacks.

    x86 Rewriting: Defeating RoP and other Shinanighans

    Richard Wartell

    The attack vector addressed here: first, some malware causes a buffer overflow. The malware has no program access, but it has input access and overflows code onto the stack. Later, stacks became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping into the malware. Later came ASLR (Address Space Layout Randomization) to randomize the memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways. "RoP" is Return-Oriented Programming: RoP attacks use your own code, writing return addresses on the stack that point to exploitable code fragments already present in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is anti-RoP compilers that compile source code with NO return instructions. ASLR randomizes the base of the address space but not the layout of code within it, so gadgets stay at fixed offsets. IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine.

    Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code; the STIR disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory: STIR writes new code sections with copies of the basic blocks in randomized locations, the old code is rewritten with jumps to the new code, and the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in the location of code, which makes brute-force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of gadgets (i.e., moved their addresses). Overhead is usually 5-10% on MS Windows and about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works. Current work is rewriting code to enforce security policies: for example, don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk.

    Clowntown Express: interesting bugs and running a bug bounty program

    Collin Greene, Facebook

    Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing security on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). FB uses security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring in people with different perspectives, and they are paid only for success. A bug bounty is a better use of a fixed amount of time and money than just code review or static code analysis. The bounty program started in July 2011 and has paid out $1.5 million to date.
    14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else); the top-paid submitters earn six figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also reveal bugs that were missed by tools or reviews, allowing improvement of the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

    Active Fingerprinting of Encrypted VPNs

    Anna Shubina, Dartmouth Institute for Security, Technology, and Society

    (I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as what router is used). Delayed packets are not informative about a network, especially if you are far away from it. More exploration is needed of how TCP behaves in real life with respect to timing.

    Making Attacks Go Backwards

    FuzzyNop, Mandiant

    This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group; they have diverse skill levels, and there are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first: "hiding" malware in the temp, recycle, or root directories; creation of unnamed scheduled tasks; obvious names of files and syscalls ("ClearEventLog"); and uncleared event logs. Clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system. Reverse engineering is hard; disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" or "[RightShift]"), as do timestamp printf format strings. Look for these in files (a toy file sweep for such markers follows this summary); presence is not proof they are used, and absence is not proof they are not. Java exploits: one can parse a jar file with idxparser.py and decompile the Java files. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware, and malware also typically needs command and control.
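    Those printable key names and format strings are easy to sweep for. Here is a minimal sketch of such a scan (my illustration, not a Mandiant tool); the marker list is an assumption and far from exhaustive.

        # Minimal sketch: flag files containing strings that keylogger output
        # or keylogger code often needs. Marker list is illustrative only.
        import sys
        from pathlib import Path

        MARKERS = [b"[Ctrl]", b"[RightShift]", b"[Backspace]",
                   b"%Y-%m-%d %H:%M"]   # printf-style timestamp format string

        def scan(root: str) -> None:
            for path in Path(root).rglob("*"):
                if not path.is_file():
                    continue
                try:
                    data = path.read_bytes()
                except OSError:
                    continue   # unreadable file; skip it
                hits = [m.decode() for m in MARKERS if m in data]
                if hits:
                    print(f"{path}: {hits}")   # presence is a clue, not proof

        if __name__ == "__main__":
            scan(sys.argv[1] if len(sys.argv) > 1 else ".")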
    Application of Artificial Intelligence in Ad-Hoc Static Code Analysis

    John Ashaman, Security Innovation

    Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, making a call graph is really hard; one problem, for example, is "evil" coding techniques such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST), with nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph, with the goal of finding a path through the AST (a maze) from source to sink. The algorithm looks at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information, see his posts on the Security Innovation blog and NRefactory on GitHub.

    Mask Your Checksums—The Gorry Details

    Eric (XlogicX) Davisson

    Sometimes when emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data), and other parts of the packet may leak more data about the IP. The TCP and IP checksums both cover some of the same data, so using both checksums yields more bits of information than using just one. Also, one can usually determine the OS from the TTL field and the ports in a packet header. If you get hundreds of possible results (16 per unknown masked nibble), you can do other things to narrow them down, such as looking at packet contents for domain or geo information. With hundreds of results, you can import them in CSV format into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached; he was able to find the exact IP address, given the geo information and university of the sender. The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if you do it at all, so as not to create a false impression of security. A sketch of the checksum arithmetic follows.
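    The leak works because the Internet checksum (RFC 1071) is just a ones'-complement sum of 16-bit words: given the unmasked fields, only the candidate addresses that reproduce the observed checksum survive. A minimal sketch of that computation (my illustration, not Eric's tooling; the sample header bytes are a standard textbook example):

        # RFC 1071 Internet checksum: ones'-complement sum of 16-bit words.
        # If an IP is masked but this checksum is left intact, candidate
        # addresses can be filtered to those that reproduce the observed value.
        import struct

        def internet_checksum(data: bytes) -> int:
            if len(data) % 2:
                data += b"\x00"                      # pad to a whole word
            total = sum(struct.unpack(f"!{len(data)//2}H", data))
            while total >> 16:                       # fold carries back in
                total = (total & 0xFFFF) + (total >> 16)
            return ~total & 0xFFFF

        # Toy 20-byte IPv4 header with the checksum field (bytes 10-11) zeroed:
        hdr = bytes.fromhex("4500003c1c4640004006" "0000" "c0a80001c0a800c7")
        print(hex(internet_checksum(hdr)))   # the value an unmasked header must carry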
    Adventures with weird machines thirty years after "Reflections on Trusting Trust"

    Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present)

    "Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs," bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs into unmodified source. But suppose there are no bugs and you trust the author: can you trust the code? Hell no! There are too many factors; it's Babylonian in nature. Why not? Input is not well defined/recognized, so code's assumptions about "checked" input will be violated (a bug/vulnerability); for example, HTML is recursive, but regex checking is not. Input can be well formed but so complex there's no telling what it does; for example, ELF file parsing is complex and has multiple ways of parsing. Input is seen differently by different pieces of a program or toolchain. Any input is a program: input executes on input handlers (it drives state changes and transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org) or it's a "virtual machine" for inputs to drive into pwnage.

    ELF ABI (the UNIX/Linux executable file format) case study: problems can arise from these steps, without planting bugs: compiler, linker, loader, ld.so/rtld, relocator, DWARF (debugger info), exceptions. The problem is you can't really automatically analyze code (it's the "halting problem," and undecidable). The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading; you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries plus dynamic symbols form a Turing-complete machine. @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file; for more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. DWARF exception-handling data (.eh_frame) plus glibc also forms a Turing machine. The x86 MMU (IDT, GDT, TSS) is another: address translation was used to create a Turing machine in which the page handler reads and writes memory on page faults, using a page table as the machine's byte code. There's an example on GitHub using this machine to fly a glider across the screen.

    Next Sergey talked about "parser differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF: there are several parsers in the OS toolchain, and they are all different. You can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs, and the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.

    Conclusions: trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it. Complex data formats mean bugs. There is no "chain of trust" in Babylon (that is, with parser differentials). We need to squeeze complexity out of data until data stops being "code equivalent."

    Further information: see langsec.org, and the USENIX WOOT 2013 (Workshop on Offensive Technologies) "weird machines" papers and videos.


  • Scrum in 5 Minutes

    - by Stephen.Walther
    The goal of this blog entry is to explain the basic concepts of Scrum in less than five minutes. You learn how Scrum can help a team of developers to successfully complete a complex software project.

    Product Backlog and the Product Owner

    Imagine that you are part of a team which needs to create a new website – for example, an e-commerce website. You have an overwhelming amount of work to do. You need to build (or possibly buy) a shopping cart, install an SSL certificate, create a product catalog, create a Facebook page, and at least a hundred other things that you have not thought of yet. According to Scrum, the first thing you should do is create a list. Place the highest priority items at the top of the list and the lower priority items lower in the list. For example, creating the shopping cart and buying the domain name might be high priority items and creating a Facebook page might be a lower priority item. In Scrum, this list is called the Product Backlog. How do you prioritize the items in the Product Backlog? Different stakeholders in the project might have different priorities. Gary, your division VP, thinks that it is crucial that the e-commerce site has a mobile app. Sally, your direct manager, thinks taking advantage of new HTML5 features is much more important. Multiple people are pulling you in different directions. According to Scrum, it is important that you always designate one person, and only one person, as the Product Owner. The Product Owner is the person who decides what items should be added to the Product Backlog and the priority of the items in the Product Backlog. The Product Owner could be the customer who is paying the bills, the project manager who is responsible for delivering the project, or a customer representative. The critical point is that the Product Owner must always be a single person and that single person has absolute authority over the Product Backlog.

    Sprints and the Sprint Backlog

    So now the developer team has a prioritized list of items and they can start work. The team starts implementing the first item in the Backlog – the shopping cart – and the team is making good progress. Unfortunately, however, half-way through the work of implementing the shopping cart, the Product Owner changes his mind. The Product Owner decides that it is much more important to create the product catalog before the shopping cart. With some frustration, the team switches their development efforts to focus on implementing the product catalog. However, part way through completing this work, once again the Product Owner changes his mind about the highest priority item. Getting work done when priorities are constantly shifting is frustrating for the developer team and it results in lower productivity. At the same time, however, the Product Owner needs to have absolute authority over the priority of the items which need to get done. Scrum solves this conflict with the concept of Sprints. In Scrum, a developer team works in Sprints. At the beginning of a Sprint the developers and the Product Owner agree on the items from the backlog which they will complete during the Sprint. This subset of items from the Product Backlog becomes the Sprint Backlog. During the Sprint, the Product Owner is not allowed to change the items in the Sprint Backlog. In other words, the Product Owner cannot shift priorities on the developer team during the Sprint. Different teams use Sprints of different lengths, such as one-month Sprints, two-week Sprints, and one-week Sprints.
    For high-stress, time-critical projects, teams typically choose shorter Sprints, such as one-week Sprints. For more mature projects, longer one-month Sprints might be more appropriate. A team can pick whatever Sprint length makes sense for them, just as long as the team is consistent. You should pick a Sprint length and stick with it.

    Daily Scrum

    During a Sprint, the developer team needs to have meetings to coordinate their work on completing the items in the Sprint Backlog. For example, the team needs to discuss who is working on what and whether any blocking issues have been discovered. Developers hate meetings (well, sane developers hate meetings). Meetings take developers away from their work of actually implementing stuff, as opposed to talking about implementing stuff. However, a developer team which never has meetings and never coordinates their work also has problems. For example, Fred might get stuck on a programming problem for days and never reach out for help even though Tom (who sits in the cubicle next to him) has already solved the very same problem. Or, both Ted and Fred might have started working on the same item from the Sprint Backlog at the same time. In Scrum, these conflicting needs – limiting meetings but enabling team coordination – are resolved with the idea of the Daily Scrum. The Daily Scrum is a meeting for coordinating the work of the developer team which happens once a day. To keep the meeting short, each developer answers only the following three questions:

    1. What have you done since yesterday?
    2. What do you plan to do today?
    3. Any impediments in your way?

    During the Daily Scrum, developers are not allowed to talk about issues with their cat, do demos of their latest work, or tell heroic stories of programming problems overcome. The meeting must be kept short, typically about 15 minutes. Issues which come up during the Daily Scrum should be discussed in separate meetings which do not involve the whole developer team.

    Stories and Tasks

    Items in the Product or Sprint Backlog – such as building a shopping cart or creating a Facebook page – are often referred to as User Stories or Stories. The Stories are created by the Product Owner and should represent some business need. Unlike the Product Owner, the developer team needs to think about how a Story should be implemented. At the beginning of a Sprint, the developer team takes the Stories from the Sprint Backlog and breaks the stories into tasks. For example, the developer team might take the Create a Shopping Cart story and break it into the following tasks:

    · Enable users to add and remove items from the shopping cart
    · Persist the shopping cart to the database between visits
    · Redirect the user to the checkout page when the Checkout button is clicked

    During the Daily Scrum, members of the developer team volunteer to complete the tasks required to implement the next Story in the Sprint Backlog. When a developer talks about what he did yesterday or plans to do today, the developer should be referring to a task. Stories are owned by the Product Owner and a story is all about business value. In contrast, the tasks are owned by the developer team and a task is all about implementation details. A story might take several days or weeks to complete. A task is something which a developer can complete in less than a day. Some teams get lazy about breaking stories into tasks.
    Neglecting to break stories into tasks can lead to "Never Ending Stories." If you don't break a story into tasks, then you can't know how much of a story has actually been completed, because you don't have a clear idea about the implementation steps required to complete the story.

    Scrumboard

    During the Daily Scrum, the developer team uses a Scrumboard to coordinate their work. A Scrumboard contains a list of the stories for the current Sprint, the tasks associated with each Story, and the state of each task. The developer team uses the Scrumboard so everyone on the team can see, at a glance, what everyone is working on. As a developer works on a task, the task moves from state to state and the state of the task is updated on the Scrumboard. Common task states are ToDo, In Progress, and Done. Some teams include additional task states such as Needs Review or Needs Testing. Some teams use a physical Scrumboard. In that case, you use index cards to represent the stories and the tasks and you tack the index cards onto a physical board. Using a physical Scrumboard has several disadvantages. A physical Scrumboard does not work well with a distributed team – for example, it is hard to share the same physical Scrumboard between Boston and Seattle. Also, generating reports from a physical Scrumboard is more difficult than generating reports from an online Scrumboard.

    Estimating Stories and Tasks

    Stakeholders in a project, the people investing in a project, need to have an idea of how a project is progressing and when the project will be completed. For example, if you are investing in creating an e-commerce site, you need to know when the site can be launched. It is not enough to just say that "the project will be done when it is done," because the stakeholders almost certainly have a limited budget to devote to the project. The people investing in the project cannot determine the business value of the project unless they have an estimate of how long it will take to complete the project. Developers hate to give estimates. The reason that developers hate to give estimates is that the estimates are almost always completely made up. For example, you really don't know how long it takes to build a shopping cart until you finish building a shopping cart, and at that point, the estimate is no longer useful. The problem is that writing code is much more like Finding a Cure for Cancer than Building a Brick Wall. Building a brick wall is very straightforward. After you learn how to add one brick to a wall, you understand everything that is involved in adding a brick to a wall. There is no additional research required and no surprises. If, on the other hand, I assembled a team of scientists and asked them to find a cure for cancer, and to estimate exactly how long it will take, they would have no idea. The problem is that there are too many unknowns. I don't know how to cure cancer; I need to do a lot of research, so I cannot even begin to estimate how long it will take. So developers hate to provide estimates, but the Product Owner and other product stakeholders have a legitimate need for estimates. Scrum resolves this conflict by using the idea of Story Points. Different teams use different units to represent Story Points. For example, some teams use shirt sizes such as Small, Medium, Large, and X-Large. Some teams prefer to use coffee cup sizes such as Tall, Short, and Grande. Finally, some teams like to use numbers from the Fibonacci series.
    These alternative units are converted into a Story Point value. Regardless of the type of unit which you use to represent Story Points, the goal is the same. Instead of attempting to estimate a Story in hours (which is doomed to failure), you use a much less fine-grained measure of work. A developer team is much more likely to be able to estimate that a Story is Small or X-Large than the exact number of hours required to complete the story. So you can think of Story Points as a compromise between the needs of the Product Owner and the developer team. When a Sprint starts, the developer team devotes more time to thinking about the Stories in a Sprint and breaks the Stories into Tasks. In Scrum, you estimate the work required to complete a Story by using Story Points, and you estimate the work required to complete a task by using hours. The difference between Stories and Tasks is that you don't create a task until you are just about ready to start working on it. A task is something that you should be able to complete within a day, so you have a much better chance of providing an accurate estimate of the work required to complete a task than a story.

    Burndown Charts

    In Scrum, you use Burndown charts to represent the remaining work on a project. You use Release Burndown charts to represent the overall remaining work for a project, and you use Sprint Burndown charts to represent the remaining work for a particular Sprint. You create a Release Burndown chart by calculating the remaining number of uncompleted Story Points for the entire Product Backlog every day. The vertical axis represents Story Points and the horizontal axis represents time. A Sprint Burndown chart is similar to a Release Burndown chart, but it focuses on the remaining work for a particular Sprint. There are two different types of Sprint Burndown charts: you can represent the remaining work in a Sprint either with Story Points or with task hours (the example chart in the original post, taken from Wikipedia, uses hours). When each Product Backlog Story is completed, the Release Burndown chart slopes down. When each Story or task is completed, the Sprint Burndown chart slopes down. Burndown charts do not always slope down over time, however. As new work is added to the Product Backlog, the Release Burndown chart slopes up. If new tasks are discovered during a Sprint, the Sprint Burndown chart will also slope up. The purpose of a Burndown chart is to give you a way to track team progress over time. If, halfway through a Sprint, the Sprint Burndown chart is still climbing a hill, then you know that you are in trouble.

    Team Velocity

    Stakeholders in a project always want more work done faster. For example, the Product Owner for the e-commerce site wants the website to launch before tomorrow. Developers tend to be overly optimistic; rarely do developers acknowledge the physical limitations of reality. So project stakeholders and the developer team often collude to delude themselves about how much work can be done and how quickly. Too many software projects begin in a state of optimism and end in frustration as deadlines zoom by. In Scrum, this problem is overcome by calculating a number called the Team Velocity. The Team Velocity is a measure of the average number of Story Points which a team has completed in previous Sprints. Knowing the Team Velocity is important during the Sprint Planning meeting, when the Product Owner and the developer team work together to determine the number of stories which can be completed in the next Sprint. If you know the Team Velocity, then you can avoid committing to more work than the team has been able to accomplish in the past, and your team is much more likely to complete all of the work required for the next Sprint. The sketch below shows the arithmetic.
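    As a toy illustration of that arithmetic (my own, with invented numbers, not part of the original post), velocity is an average of completed Story Points, and a burndown is just the remaining points sampled over time:

        # Toy illustration: team velocity and release burndown arithmetic.
        completed_per_sprint = [18, 22, 20]   # Story Points finished in past Sprints
        velocity = sum(completed_per_sprint) / len(completed_per_sprint)
        print(f"velocity: {velocity:.1f} points/sprint")        # -> 20.0

        backlog_points = 120                  # uncompleted Story Points today
        print(f"rough forecast: {backlog_points / velocity:.1f} sprints remaining")

        # Release burndown: remaining points sampled at each Sprint boundary.
        remaining = [120, 105, 88, 90, 72]    # a bump up means new work was added
        for sprint, points in enumerate(remaining):
            print(f"after sprint {sprint}: {points} points left")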
    Scrum Master

    There are three roles in Scrum: the Product Owner, the developer team, and the Scrum Master. I've already discussed the Product Owner: the Product Owner is the one and only person who maintains the Product Backlog and prioritizes the stories. I've also described the role of the developer team: the members of the developer team do the work of implementing the stories by breaking the stories into tasks. The final role, which I have not discussed, is the role of the Scrum Master. The Scrum Master is responsible for ensuring that the team is following the Scrum process. For example, the Scrum Master is responsible for making sure that there is a Daily Scrum meeting and that everyone answers the standard three questions. The Scrum Master is also responsible for removing (non-technical) impediments which the team might encounter. For example, if the team cannot start work until everyone installs the latest version of Microsoft Visual Studio, then the Scrum Master has the responsibility of working with management to get the latest version of Visual Studio as quickly as possible. The Scrum Master can be a member of the developer team. Furthermore, different people can take on the role of the Scrum Master over time. The Scrum Master, however, cannot be the same person as the Product Owner.

    Using SonicAgile

    SonicAgile (SonicAgile.com) is an online tool which you can use to manage your projects using Scrum. You can use the SonicAgile Product Backlog to create a prioritized list of stories. You can estimate the size of the Stories using different Story Point units such as shirt sizes and coffee cup sizes. You can use SonicAgile during the Sprint Planning meeting to select the Stories that you want to complete during a particular Sprint. You can configure Sprints to be any length of time. SonicAgile calculates Team Velocity automatically and displays a warning when you add too many stories to a Sprint; in other words, it warns you when it thinks you are overcommitting in a Sprint. SonicAgile also includes a Scrumboard which displays the list of Stories selected for a Sprint and the tasks associated with each story. You can drag tasks from one task state to another. Finally, SonicAgile enables you to generate Release Burndown and Sprint Burndown charts. You can use these charts to view the progress of your team. To learn more about SonicAgile, visit SonicAgile.com.

    Summary

    In this post, I described many of the basic concepts of Scrum. You learned how a Product Owner uses a Product Backlog to create a prioritized list of stories. I explained why work is completed in Sprints so the developer team can be more productive. I also explained how a developer team uses the Daily Scrum to coordinate their work. You learned how the developer team uses a Scrumboard to see, at a glance, who is working on what and the state of each task. I also discussed Burndown charts. You learned how you can use both Release and Sprint Burndown charts to track team progress in completing a project.
Finally, I described the crucial role of the Scrum Master – the person who is responsible for ensuring that the rules of Scrum are being followed. My goal was not to describe all of the concepts of Scrum. This post was intended to be an introductory overview. For a comprehensive explanation of Scrum, I recommend reading Ken Schwaber’s book Agile Project Management with Scrum: http://www.amazon.com/Agile-Project-Management-Microsoft-Professional/dp/073561993X/ref=la_B001H6ODMC_1_1?ie=UTF8&qid=1345224000&sr=1-1


  • Oracle User Communities and Enterprise Manager

    - by Anand Akela
    Contributed by Joe Dimmer, Senior Business Development Manager, Oracle Enterprise Manager

    Heightened interest in and adoption of Oracle Enterprise Manager has led to keen interest in "manageability" within the user group community. In response, user groups are equipping their membership with the right tools for implementing and using manageability, through education opportunities and Special Interest Groups. Manageability is increasingly viewed not only as a means of turning the Oracle environment into a competitive business advantage for organizations, but also as a means of advancing the individual careers of those who embrace enterprise management. Two Oracle user groups – the Independent Oracle User Group (IOUG) and the United Kingdom Oracle User Group (UKOUG) – each have Special Interest Groups where manageability is prominently featured. There are also efforts underway to establish similarly chartered SIGs that will be reported in future blogs. The good news is, there's a lot of news!

    First off, the IOUG will be hosting a Summer Series of live webcasts:

    "Configuring and Managing a Private Cloud with Enterprise Manager 12c" by Kai Yu of Dell, Inc. – Wednesday, June 20th from Noon – 1 PM CDT. Click here for details & registration.

    "What is User Experience Monitoring and What is Not? A case study of Oracle Global IT's implementation of Enterprise Manager 12c and RUEI" by Eric Tran Le of Oracle – Wednesday, July 18th from Noon – 1 PM CDT. Click here for details & registration.

    "Shed some light on the 'bumps in the night' with Enterprise Manager 12c" by David Start of Johnson Controls – Wednesday, August 22nd from Noon – 1 PM CDT. Click here for details & registration.

    In addition, the UKOUG Availability and Infrastructure Management (AIM) SIG is hosting its next meeting on Tuesday, July 3rd at the Met in Leeds, where EM 12c Cloud Management will be presented. Click here for details & registration.

    In future posts from Joe, look for news related to the following: an IOUG Community Page and Newsletter devoted to manageability; a full day of manageability featured during Oracle OpenWorld 2012 "SIG Sunday"; and happenings from other regional User Groups that feature manageability.

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter


  • The Quest for Efficiency as the Holy Grail of Healthcare IT

    - by Eloy M. Rodríguez
    The XVIII Jornadas de Informática Sanitaria en Andalucía (the 18th Healthcare Informatics Conference in Andalusia) closed last Friday with 11,500 hours of collective intelligence. Although I suppose that figure comes from multiplying the hours of sessions and workshops by the number of registrants (which would not be entirely accurate, since average attendance, by my estimate, was around ninety people), I suppose it reflects the total if we include the volume of informal interactions that the format and venue encourage. My subjective summary is that we are all aware that we must achieve more efficiency in, and thanks to, healthcare IT, and that to get there we identified some guidelines which the attendees, in their different roles, should apply and help spread. Along those lines, I think the standout is the need to be very clear about where you are starting from and what you want to achieve, for which it is essential to measure, and for those measurements to feed back into the system in pursuit of its objectives. On that note, as an anecdote, I would like to record a paradox about efficiency that was presented: given that the cost per day of a hospital stay is higher at the beginning than during the final days, if you become more efficient and reduce the average stay, you free up end-of-stay days that will be used for new admissions, so the growing number of more expensive first days of stay will increase the total cost. In that case we would improve service to citizens but increase cost, unless action were taken to resize hospital capacity, lowering cost without degrading quality. Another highlight was the possibility, and the need, to harness IT capabilities to make structural changes and move medicine from reactive to proactive, using alerts that make it possible to act before a serious problem occurs. Another topic discussed was the real need to make citizens genuinely co-responsible, given the enormous low-cost possibilities IT offers, taking on a path toward collaborative healthcare that faces many challenges ahead but even more opportunities. The citizen's health folder, emerging in several projects and ideas, is a step in that direction. One topic that stirred passions came when the Managing Director of the Sergas complained that IT projects were terribly slow. Unfortunately her schedule did not allow her to stay for the debate, which was quite intense, and which brought up factors beyond the specific efficiency of the IT professionals involved in the projects, such as the very long administrative process, changing specifications, custom designs, and so on. Finally, I want to mention a very interesting topic in line with what was said at the conference about the need to measure: the SEIS Index. The idea is to define a series of criteria, grouped into broad lines and with a fine-grained breakdown, to monitor the contribution of IT to improving health and healthcare. We were shown preliminary versions, with the debate still open between two broad approaches: working from the high-level objectives down to the processes, or from the processes up to the objectives. The discussion is not merely academic, since it influences the parameters to be established. The good news is that the work is quite far along, and the health services will soon have a comparison tool grounded in the national reality.

    For those interested, several attendees tweeted throughout the conference, so anyone who wants a bit more detail can go to Twitter, search for the hashtag #jisa18, and read from the oldest to the newest tweets for a running, subjective account of what happened there. I cannot close without a couple of self-criticisms, since I am a member of the SEIS. The first concerns the SEIS portal, which did not offer the interactivity that a conference like this needed. It will soon begin to carry documents and analyses of what took place, and later the more polished write-ups and analyses will appear in the I+S journal. But in the second decade of the 21st century, considerably more is needed. The other is the undesired scarcity of users of healthcare IT, in their roles as healthcare professionals and as citizen users of healthcare information systems. We have to be proactive so that they attend in significant numbers; otherwise we risk becoming healthcare-IT absolutists: everything for the users, but without the users.


  • What packages do I need to compile .tex documents using XeLaTeX?

    - by maria
    Hi. I'm aware of the existence of similar threads on this forum, but none of the replies match my problem. I'm using Ubuntu 10.04 and I had no problems with fonts till I decided to use XeLaTeX instead of LaTeX (cf. http://tex.stackexchange.com/questions/12347/typesetting-a-document-using-arabic-script/12358#12358). The problem is that I'm not able to compile any .tex document using XeLaTeX, or even to properly display the XeLaTeX documentation. As I learned thanks to the mentioned thread, XeLaTeX uses the fonts available in the system in general. I was trying to read the fontspec documentation, but it opens as a PDF with a lot of white gaps, and the terminal output (quite long) consists mostly of errors. These are just a few lines of it:

        Error: Missing language pack for 'Adobe-Japan1' mapping
        Error: Unknown font tag 'F5.1'
        Error (24124): No font in show
        Error: Unknown font tag 'F5.1'

    I was trying to compile this simple XeLaTeX file:

        \documentclass{article}
        \usepackage{fontspec}
        \setmainfont{Linux Libertine O}
        \begin{document}
        Hello World!
        \end{document}

    without success. This is the terminal output of the compilation:

        This is XeTeX, Version 3.1415926-2.2-0.9995.2 (TeX Live 2009/Debian)
        restricted \write18 enabled.
        entering extended mode
        (./ex.tex
        LaTeX2e <2009/09/24>
        Babel <v3.8l> and hyphenation patterns for english, usenglishmax, dumylang,
        nohyphenation, polish, loaded.
        (/usr/share/texmf-texlive/tex/latex/base/article.cls
        Document Class: article 2007/10/19 v1.4h Standard LaTeX document class
        (/usr/share/texmf-texlive/tex/latex/base/size10.clo))
        (/usr/share/texmf-texlive/tex/xelatex/fontspec/fontspec.sty
        (/usr/share/texmf-texlive/tex/generic/ifxetex/ifxetex.sty)
        (/usr/share/texmf-texlive/tex/latex/tools/calc.sty)
        (/usr/share/texmf-texlive/tex/latex/xkeyval/xkeyval.sty
        (/usr/share/texmf-texlive/tex/generic/xkeyval/xkeyval.tex
        (/usr/share/texmf-texlive/tex/generic/xkeyval/keyval.tex)))
        (/usr/share/texmf-texlive/tex/latex/base/fontenc.sty
        (/usr/share/texmf-texlive/tex/xelatex/euenc/eu1enc.def)
        (/usr/share/texmf-texlive/tex/xelatex/euenc/eu1lmr.fd))
        fontspec.cfg loaded.
        (/usr/share/texmf-texlive/tex/xelatex/fontspec/fontspec.cfg))
        kpathsea: Invalid fontname `Linux Libertine O', contains ' '
        ! Font \zf@basefont="Linux Libertine O" at 10.0pt not loadable: Metric (TFM)
        file or installed font not found.
        \zf@fontspec ...ntname \zf@suffix " at \f@size pt \unless \ifzf@icu \zf@set@...
        l.3 \setmainfont{Linux Libertine O}
        ?

    I can't find Linux Libertine O. Searching for otf- with aptitude gives this result:

        maria@maria-laptop:/etc/fonts$ aptitude search otf
        p  emdebian-rootfs       - emdebian root filesystem support
        p  libotf-bin            - A Library for handling OpenType Font - utilities
        p  libotf-dev            - A Library for handling OpenType Font - development
        i  libotf0               - A Library for handling OpenType Font - runtime
        p  libotf0-dbg           - The libotf libraries and debugging symbols
        p  libpam-dotfile        - A PAM module which allows users to have more than one password
        p  livecd-rootfs         - construction script for the livecd rootfs
        p  makebootfat           - Utility to create a bootable FAT filesystem
        p  otf-ipaexfont         - Japanese OpenType font, IPAexFont (IPAexGothic/Mincho)
        p  otf-ipaexfont-gothic  - Japanese OpenType font, IPAexFont (IPAexGothic)
        p  otf-ipaexfont-mincho  - Japanese OpenType font, IPAexFont (IPAexMincho)
        p  otf-ipafont           - Japanese OpenType font set, IPAfont
        p  otf-ipafont-gothic    - Japanese OpenType font set, IPA Gothic font
        p  otf-ipafont-mincho    - Japanese OpenType font set, IPA Mincho font
        p  otf-stix              - the Scientific and Technical Information eXchange fonts
        p  otf-thai-tlwg         - Thai fonts in OpenType format
        p  otf-yozvox-yozfont    - Japanese proportional Handwriting OpenType font
        p  otf2bdf               - generate BDF bitmap fonts from OpenType outline fonts
        p  robotfindskitten      - Zen Simulation of robot finding kitten

    So the font in question is not just uninstalled, but not available, if I'm not wrong. Does that mean that I lack some repositories? I also tried to apply the solution from the thread "How do I reinstall default fonts?", but the result is:

        maria@maria-laptop:~$ sudo apt-get install msttcorefonts
        [sudo] password for maria:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Note, selecting ttf-mscorefonts-installer instead of msttcorefonts
        ttf-mscorefonts-installer is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        maria@maria-laptop:~$

    It seems that this is not a usual problem for XeLaTeX users; nobody in the mentioned thread suggested installing anything other than TeX Live. Thanks in advance


  • Warehouse Management for Endeca: videos available on YouTube

    - by Claudia Caramelli-Oracle
    The WMS product management team has recorded four videos on the Warehouse Management extensions for Endeca, the application that manages warehouse operations in real time. Almost an hour of content, covering: an introduction to the WMS extensions for Endeca; Plan and Track Fulfillment; Space Utilization; and Labor Utilization. All four videos can be found by clicking here.


  • Why I Love the Social Management Platform I Use

    - by Mike Stiles
    Not long ago, I asked the product heads for the various components of the Oracle Social Cloud's SRM to say what they thought was coolest about their component. And while they did a fine job, it was recently pointed out to me that no one around here uses the platform in a real-world setting more than I do, as I not only blog and podcast my brains out but also run Oracle Social's own social properties. Of course I'm pro-Oracle Social's product. Duh. But if you can get around immediately writing this off as a puff piece, there are real reasons beyond my employment that the Oracle SRM works for me as a community manager. If it didn't, I could have simply written about something else, like how people love smartphones or something genius like that.

    Post Grid

    I like seeing what I want to see. I'm difficult that way. The post grid lets me see all posts for all channels, with custom columns showing me how posts are doing. I can filter the grid by social channel, published, scheduled, draft, suggested, etc. Then there's a pullout side panel that shows me post details, including engagement analytics. From the pullout, I can preview the post, do a quick edit, a full edit, or (my favorite) copy a post so I can edit it and schedule it for other times so I don't have to repeat from scratch. I'm not lazy, just time conscious.

    The Post Creation Environment

    Given our post volume, I need this to be as easy as it can be. I can highlight which streams I want the post to go out on, edit for the individual streams, maintain a media library that's easy to upload to and attach from, tag posts, insert links that auto-shorten to an orac.le shortlink, schedule with a nice calendar visual, geo-target, drop photos inline into Twitter, and review each post.

    Watching My Channels

    The Engage component of the Oracle SRM brings in the activity happening on all my channels and drops it into a grid. I keep this open round the clock. Again, I get to see only what I want: social network, stream, unread messages, engagement by how I labeled them, and date range. I can bring up a post with a click, reply, label it, retweet it, assign it, delete it, archive it, etc. So don't bother trying to be a troll on my channels.

    Analytics

    Social publishing and engaging 24/7 would be pretty unrewarding if I couldn't see how our audience was responding. Frankly, I get more analytics than I know what to do with (I'm a content creator, not a data analyst). But I do know what numbers I care about, and they're available by channel, date range, and campaign. I'm seeing fan count, sources, and demographics. I'm seeing engagement, what kinds of posts are getting engagement, and top engagers. I'm seeing my reach, both organic and paid. I'm seeing how individual posts performed in terms of engagement and virality, and posting time/date insight.

    Have I covered all the value propositions? I've covered pathetically few of them. It would be impossible in blog length to give shout-outs to the vast number of features and functionalities. From organizing teams and managing permissions with Workflow to the powerful ability to monitor topics (and your competition) across the web in Listen, it's a major, and increasingly necessary, weapon in your social marketing arsenal. The life of a Community Manager is not for everybody. So if the Oracle SRM can actually make a Community Manager's life easier, what's not to love? I invite you to take a look at and participate in our Oracle Social Cloud social channels!
Facebook Twitter YouTube Google Plus LinkedIn Daily Podcast on iHeartRadio @mikestiles @oraclesocial Photo: freeimages.com

    Read the article

  • Passed: Exam 70-480: Programming in HTML5 with JavaScript and CSS3

    First off: Mission accomplished successfully. And it was fun! Using the resources listed in my previous article about Learning Content, I'd like to thank Microsoft Technical Evangelists Jeremy Foster and Michael Palermo for their excellent jump-start videos on Channel 9, and the various authors at Pluralsight.

    Local Prometric testing centre
    Back in November I chose a local testing centre which was the easiest to access from my office, despite the horrible traffic you might experience here on the island. Actually, it was not the closest one. But due to their website, their awards as a Microsoft Learning Center, and my general curiosity about the premises, I gave FRCI my preference. Boy, how I would regret this decision this morning... The official Prometric exam guide asks any attendee to show up at least 30 minutes prior to the scheduled time of the test. Well, this should have been the easy part, but unfortunately, due to heavier traffic than usual, I arrived only 20 minutes ahead of time. Not too bad, but more was to come. The building, called 'le Hub', is nicely renovated and provides the right environment for an IT group of companies like FRCI. I think they currently have 5 independent IT departments over there. Even the handling at the reception was straightforward, welcoming and put me at ease. But then... first shock: "We don't have any exam registration for today." - Hm, that's nice... Here's my mail confirmation from Prometric. First attack successfully handled, and the lady went off again to check their records. Next shock: A couple of minutes later, another guy tried to explain to me that "the staff of the testing centre is already on vacation and the centre is officially closed." - Are you kidding me? Here's the official confirmation by Prometric, and I don't find it funny that I took a day off today only to hear this kind of blubbering nonsense. I thought I would be on the safe side choosing a company with a good reputation here on the island. Another 40 (!) minutes later, they finally came back to the waiting area with a pre-filled form about the test appointment. And finally, after an hour of waiting, discussing, restarting the testing PC, and lots of talk, I was allowed to sit down and take the exam.

    Exam details
    Well, you know the rules. Signing an NDA doesn't allow me to provide you any details about the questions or topics that have been covered. Please check out the official exam description, and you're on the right way. Sorry, guys... ;-)

    The result
    "Congratulations! You have passed this Microsoft Certification exam." - In general, I have to admit that the parts on HTML5 and CSS3 were the easiest after all, and that I have to get myself a little more familiar with certain JavaScript features like class definitions, inheritance and data security. Anyway, exam passed - who cares about the details?

    Next goal
    Of course, the journey to Microsoft Certifications continues, and my next goals are to pass exams 70-481 - Essentials of Developing Windows Store Apps using HTML5 and JavaScript and 70-482 - Advanced Windows Store App Development using HTML5 and JavaScript. This would allow me to achieve the certification of MCSD: Windows Store Apps using HTML5. I guess during 2013 I'll be busy with various learning and teaching lessons.

    Read the article

  • C#, Delegates and LINQ

    - by JustinGreenwood
    One of the topics many junior programmers struggle with is delegates. And today, anonymous delegates and lambda expressions are profuse in .NET APIs. To help some VB programmers adapt to C#, I walked through some simple samples to show them the different flavors of delegates.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    namespace DelegateExample
    {
        class Program
        {
            public delegate string ProcessStringDelegate(string data);

            public static string ReverseStringStaticMethod(string data)
            {
                return new String(data.Reverse().ToArray());
            }

            static void Main(string[] args)
            {
                var stringDelegates = new List<ProcessStringDelegate>
                {
                    //==========================================================
                    // Declare a new delegate instance and pass the name of the method in
                    new ProcessStringDelegate(ReverseStringStaticMethod),

                    //==========================================================
                    // A shortcut is to just pass the name of the method in
                    ReverseStringStaticMethod,

                    //==========================================================
                    // You can also create an anonymous delegate
                    delegate (string inputString) // Scramble
                    {
                        var outString = inputString;
                        if (!string.IsNullOrWhiteSpace(inputString))
                        {
                            var rand = new Random();
                            var chs = inputString.ToCharArray();
                            for (int i = 0; i < inputString.Length * 3; i++)
                            {
                                int x = rand.Next(chs.Length), y = rand.Next(chs.Length);
                                char c = chs[x];
                                chs[x] = chs[y];
                                chs[y] = c;
                            }
                            outString = new string(chs);
                        }
                        return outString;
                    },

                    //==========================================================
                    // Yet another syntax would be the lambda expression syntax
                    inputString =>
                    {
                        // ROT13-style shift (note: the offsets do not wrap around
                        // the alphabet, so the result is a scramble rather than
                        // true ROT13, as the sample output below shows)
                        var array = inputString.ToCharArray();
                        for (int i = 0; i < array.Length; i++)
                        {
                            int n = (int)array[i];
                            n += (n >= 'a' && n <= 'z') ? ((n > 'm') ? 13 : -13) :
                                 ((n >= 'A' && n <= 'Z') ? ((n > 'M') ? 13 : -13) : 0);
                            array[i] = (char)n;
                        }
                        return new string(array);
                    }
                    //==========================================================
                };

                // Display the results of the delegate calls
                var stringToTransform = "Welcome to the jungle!";
                System.Console.ForegroundColor = ConsoleColor.Cyan;
                System.Console.Write("String to Process: ");
                System.Console.ForegroundColor = ConsoleColor.Yellow;
                System.Console.WriteLine(stringToTransform);
                stringDelegates.ForEach(delegatePointer =>
                {
                    System.Console.WriteLine();
                    System.Console.ForegroundColor = ConsoleColor.Cyan;
                    System.Console.Write("Delegate Method Name: ");
                    System.Console.ForegroundColor = ConsoleColor.Magenta;
                    System.Console.WriteLine(delegatePointer.Method.Name);
                    System.Console.ForegroundColor = ConsoleColor.Cyan;
                    System.Console.Write("Delegate Result: ");
                    System.Console.ForegroundColor = ConsoleColor.White;
                    System.Console.WriteLine(delegatePointer(stringToTransform));
                });
                System.Console.ReadKey();
            }
        }
    }

    The output of the program is below:

    String to Process: Welcome to the jungle!

    Delegate Method Name: ReverseStringStaticMethod
    Delegate Result: !elgnuj eht ot emocleW

    Delegate Method Name: ReverseStringStaticMethod
    Delegate Result: !elgnuj eht ot emocleW

    Delegate Method Name: <Main>b__1
    Delegate Result: cg ljotWotem!le une eh

    Delegate Method Name: <Main>b__2
    Delegate Result: dX_V|`X ?| ?[X ]?{Z_X!

    Read the article

  • Project Time Tracker

    - by Geertjan
    Based on yesterday's blog entry, let's do something semi-useful and display, in the project popup that appears when you right-click a project in the Projects window, the time since the last change was made anywhere in the project. That is, we can listen recursively to any changes done within a project and then update the popup with the newly acquired information, dynamically:

    import java.awt.event.ActionEvent;
    import java.text.DateFormat;
    import java.text.SimpleDateFormat;
    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.List;
    import javax.swing.AbstractAction;
    import org.netbeans.api.project.Project;
    import org.netbeans.api.project.ProjectUtils;
    import org.openide.awt.ActionID;
    import org.openide.awt.ActionReference;
    import org.openide.awt.ActionRegistration;
    import org.openide.awt.StatusDisplayer;
    import org.openide.filesystems.FileAttributeEvent;
    import org.openide.filesystems.FileChangeListener;
    import org.openide.filesystems.FileEvent;
    import org.openide.filesystems.FileRenameEvent;
    import org.openide.util.Lookup;
    import org.openide.util.LookupEvent;
    import org.openide.util.LookupListener;
    import org.openide.util.Utilities;
    import org.openide.util.WeakListeners;

    @ActionID(
            category = "Demo",
            id = "org.ptt.TrackProjectSelectionAction")
    @ActionRegistration(
            lazy = false,
            displayName = "NOT-USED")
    @ActionReference(
            path = "Projects/Actions",
            position = 0)
    public final class TrackProjectSelectionAction extends AbstractAction
            implements LookupListener, FileChangeListener {

        private Lookup.Result<Project> projects;
        private Project context;
        private Long startTime;
        private Long changedTime;
        private DateFormat formatter;
        private List<Project> timedProjects;

        public TrackProjectSelectionAction() {
            putValue("popupText", "Timer");
            formatter = new SimpleDateFormat("HH:mm:ss");
            timedProjects = new ArrayList<Project>();
            projects = Utilities.actionsGlobalContext().lookupResult(Project.class);
            projects.addLookupListener(
                    WeakListeners.create(LookupListener.class, this, projects));
            resultChanged(new LookupEvent(projects));
        }

        @Override
        public void resultChanged(LookupEvent le) {
            Collection<? extends Project> allProjects = projects.allInstances();
            if (allProjects.size() == 1) {
                Project currentProject = allProjects.iterator().next();
                if (!timedProjects.contains(currentProject)) {
                    String currentProjectName =
                            ProjectUtils.getInformation(currentProject).getDisplayName();
                    putValue("popupText",
                            "Start Timer for Project: " + currentProjectName);
                    StatusDisplayer.getDefault().setStatusText(
                            "Current Project: " + currentProjectName);
                    timedProjects.add(currentProject);
                    context = currentProject;
                }
            }
        }

        @Override
        public void actionPerformed(ActionEvent e) {
            refresh();
        }

        protected void refresh() {
            startTime = System.currentTimeMillis();
            String formattedStartTime = formatter.format(startTime);
            putValue("popupText", "Timer started: " + formattedStartTime
                    + " (" + ProjectUtils.getInformation(context).getDisplayName() + ")");
        }

        @Override
        public void fileChanged(FileEvent fe) {
            changedTime = System.currentTimeMillis();
            formatter = new SimpleDateFormat("mm:ss");
            String formattedLapse = formatter.format(changedTime - startTime);
            putValue("popupText", "Time since last change: " + formattedLapse
                    + " (" + ProjectUtils.getInformation(context).getDisplayName() + ")");
            startTime = changedTime;
        }

        @Override
        public void fileFolderCreated(FileEvent fe) {}

        @Override
        public void fileDataCreated(FileEvent fe) {}

        @Override
        public void fileDeleted(FileEvent fe) {}

        @Override
        public void fileRenamed(FileRenameEvent fre) {}

        @Override
        public void fileAttributeChanged(FileAttributeEvent fae) {}
    }

    Some more work needs to be done to complete the above, i.e., for each project you somehow need to maintain the start time and the last change, and redisplay that whenever the user right-clicks the project.

    Read the article

  • How to parse nagios status.dat file?

    - by daniels
    I'd like to parse the status.dat file for nagios3 and output it as XML with a Python script. The XML part is the easy one, but how do I go about parsing the file? Use a multi-line regex? It's possible the file will be large, as many hosts and services are monitored; will loading the whole file into memory be wise? I only need to extract services that have a critical state and the host they belong to. Any help and pointing in the right direction will be highly appreciated. LE: Here's how the file looks:

    ########################################
    # NAGIOS STATUS FILE
    #
    # THIS FILE IS AUTOMATICALLY GENERATED
    # BY NAGIOS. DO NOT MODIFY THIS FILE!
    ########################################

    info {
        created=1233491098
        version=2.11
        }

    program {
        modified_host_attributes=0
        modified_service_attributes=0
        nagios_pid=15015
        daemon_mode=1
        program_start=1233490393
        last_command_check=0
        last_log_rotation=0
        enable_notifications=1
        active_service_checks_enabled=1
        passive_service_checks_enabled=1
        active_host_checks_enabled=1
        passive_host_checks_enabled=1
        enable_event_handlers=1
        obsess_over_services=0
        obsess_over_hosts=0
        check_service_freshness=1
        check_host_freshness=0
        enable_flap_detection=0
        enable_failure_prediction=1
        process_performance_data=0
        global_host_event_handler=
        global_service_event_handler=
        total_external_command_buffer_slots=4096
        used_external_command_buffer_slots=0
        high_external_command_buffer_slots=0
        total_check_result_buffer_slots=4096
        used_check_result_buffer_slots=0
        high_check_result_buffer_slots=2
        }

    host {
        host_name=localhost
        modified_attributes=0
        check_command=check-host-alive
        event_handler=
        has_been_checked=1
        should_be_scheduled=0
        check_execution_time=0.019
        check_latency=0.000
        check_type=0
        current_state=0
        last_hard_state=0
        plugin_output=PING OK - Packet loss = 0%, RTA = 3.57 ms
        performance_data=
        last_check=1233490883
        next_check=0
        current_attempt=1
        max_attempts=10
        state_type=1
        last_state_change=1233489475
        last_hard_state_change=1233489475
        last_time_up=1233490883
        last_time_down=0
        last_time_unreachable=0
        last_notification=0
        next_notification=0
        no_more_notifications=0
        current_notification_number=0
        notifications_enabled=1
        problem_has_been_acknowledged=0
        acknowledgement_type=0
        active_checks_enabled=1
        passive_checks_enabled=1
        event_handler_enabled=1
        flap_detection_enabled=1
        failure_prediction_enabled=1
        process_performance_data=1
        obsess_over_host=1
        last_update=1233491098
        is_flapping=0
        percent_state_change=0.00
        scheduled_downtime_depth=0
        }

    service {
        host_name=gateway
        service_description=PING
        modified_attributes=0
        check_command=check_ping!100.0,20%!500.0,60%
        event_handler=
        has_been_checked=1
        should_be_scheduled=1
        check_execution_time=4.017
        check_latency=0.210
        check_type=0
        current_state=0
        last_hard_state=0
        current_attempt=1
        max_attempts=4
        state_type=1
        last_state_change=1233489432
        last_hard_state_change=1233489432
        last_time_ok=1233491078
        last_time_warning=0
        last_time_unknown=0
        last_time_critical=0
        plugin_output=PING OK - Packet loss = 0%, RTA = 2.98 ms
        performance_data=
        last_check=1233491078
        next_check=1233491378
        current_notification_number=0
        last_notification=0
        next_notification=0
        no_more_notifications=0
        notifications_enabled=1
        active_checks_enabled=1
        passive_checks_enabled=1
        event_handler_enabled=1
        problem_has_been_acknowledged=0
        acknowledgement_type=0
        flap_detection_enabled=1
        failure_prediction_enabled=1
        process_performance_data=1
        obsess_over_service=1
        last_update=1233491098
        is_flapping=0
        percent_state_change=0.00
        scheduled_downtime_depth=0
        }

    It can have any number of hosts and a host can have any number of services.
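    A minimal parsing sketch, assuming the line-oriented format shown above (one key=value pair per line, blocks bracketed by "type {" and "}"). It reads line by line, so the whole file never sits in memory. Note the block names below follow this 2.x excerpt; nagios3 writes hoststatus/servicestatus blocks, so adjust the type names accordingly:

    from collections import defaultdict

    def parse_status(path):
        # Yields (block_type, dict) for each 'type { key=value ... }' block,
        # skipping comment lines; partition() splits only on the first '='.
        current_type, current = None, None
        with open(path) as f:
            for raw in f:
                line = raw.strip()
                if not line or line.startswith('#'):
                    continue
                if line.endswith('{'):
                    current_type, current = line.split()[0], {}
                elif line == '}':
                    yield current_type, current
                    current_type, current = None, None
                elif current is not None and '=' in line:
                    key, _, value = line.partition('=')
                    current[key] = value

    # Services in a critical state (current_state=2), grouped by host.
    critical = defaultdict(list)
    for btype, data in parse_status('status.dat'):
        if btype == 'service' and data.get('current_state') == '2':
            critical[data['host_name']].append(data['service_description'])
    print(dict(critical))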

    Read the article

  • Linq IQueryable variables

    - by kevinw
    Hi, I have a function that should return me a string, but what it is actually bringing back is the SQL expression that I am using on the database. What have I done wrong?

    public static IQueryable XMLtoProcess(string strConnection)
    {
        Datalayer.HameserveDataContext db = new HameserveDataContext(strConnection);
        var xml = from x in db.JobImports
                  where x.Processed == false
                  select new { x.Content };
        return xml;
    }

    This is the code sample; this is what I should be getting back:

    <PMZEDITRI TRI_TXNNO="11127" TRI_TXNSEQ="1" TRI_CODE="600" TRI_SUBTYPE="1" TRI_STATUS="Busy" TRI_CRDATE="2008-02-25T00:00:00.0000000-00:00" TRI_CRTIME="54540" TRI_PRTIME="0" TRI_BATCH="" TRI_REF="" TRI_CPY="main" C1="DEPL" C2="007311856/001" C3="14:55" C4="CUB2201" C5="MR WILLIAM HOGG" C6="CS12085393" C7="CS" C8="Blocked drain" C9="Scheme: CIS Home Rescue edi tests" C10="MR WILLIAM HOGG" C11="74 CROMARTY" C12="OUSTON" C13="CHESTER LE STREET" C14="COUNTY DURHAM" C15="" C16="DH2 1JY" C17="" C18="" C19="" C20="" C21="CIS" C22="0018586965 ||" C23="BD" C24="W/DE/BD" C25="EX-DIRECTORY" C26="" C27="/" C28="CIS Home Rescue" C29="CIS Home Rescue Plus Insd" C30="Homeserve Claims Management Ltd|Upon successful completion of this repair the contractor must submit an itemised and costed Homeserve Claims Management Ltd Job Sheet." N1="79.9000" N2="68.0000" N3="11.9000" N4="0" N5="0" N6="0" D1="2008-02-25T00:00:00.0000000-00:00" T2="EX-DIRECTORY" T4="Blocked drain" TRI_SYSID="9" TRI_RETRY="3" TRI_RETRYTIME="0" />

    Can anyone help me please?
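    A guess from the symptom (only the excerpt is shown): the IQueryable is being turned into a string, e.g. via ToString(), instead of being enumerated. A LINQ query is a description of work; enumerating it (foreach, .ToList(), .First()) is what executes it and yields the Content values. The deferred behavior is the same idea as a Python generator, illustrated here:

    # Rough Python analogue (an illustration, not the C# fix): printing the
    # query object shows the expression, not the data; iterating runs it.
    rows = [{"Content": "<PMZEDITRI ... />", "Processed": False}]
    query = (r["Content"] for r in rows if not r["Processed"])

    print(query)        # <generator object ...>  (compare: the SQL text)
    print(next(query))  # iterating executes the query and yields the string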

    Read the article

  • powershell errors on remove-item command

    - by Lachlan
    Hi, I've written a PowerShell script which removes folders more than 7 days old. It seems to be removing the folders, but for some reason it's giving me errors. I was wondering if anyone might be able to tell me why? The script is:

    $rootBackupFolder = "\\server\share"
    get-childitem $rootBackupFolder |
        where {$_.PSIsContainer -AND
               $_.Name -match ("^CVSRepository_(19|20)[0-9][0-9](0[1-9]|1[0-2])(0[1-9]|[12][0-9]|3[01])$") -AND
               $_.LastWriteTime -le (Get-Date).AddDays(-7)} |
        remove-item -recurse -force

    There are 2 errors I get. The first is this one:

    (Get-Date).AddDays(-7)} | remove-item <<<< -recurse -force
    Remove-Item : Cannot remove item \\server\share\CVSRepository_20100305\somesubfolder\sumotherfolder: Could not find a part of the path '\\server\share\CVSRepository_20100305\somesubfolder\sumotherfolder'.

    The other error is this:

    (Get-Date).AddDays(-7)} | remove-item <<<< -recurse -force
    Remove-Item : Cannot remove item \\server\share\CVSRepository_20100312\somesubfolder\: Access to the path 'Rdl' is denied.
    At C:\CVSRepository_BackupScript\Backup.ps1:43 char:38
    + (Get-Date).AddDays(-7)} | remove-item <<<< -recurse -force

    Any ideas? The script is running under a domain account which has privileges to delete the remote folders. And in fact it is indeed deleting the folders! But giving me errors...

    Thanks,
    Lachlan

    Read the article

  • Nested table height in TCPDF

    - by Kuroki Kaze
    Is it possible to make a nested table fit the height of its parent cell in TCPDF? My code:

    <?php
    require_once('tcpdf/config/lang/eng.php');
    require_once('tcpdf/tcpdf.php');

    $pdf = new TCPDF(PDF_PAGE_ORIENTATION, PDF_UNIT, PDF_PAGE_FORMAT, true, 'UTF-8', false);
    $pdf->setPrintHeader(false);
    $pdf->setPrintFooter(false);
    $pdf->SetFont('times', 'BI', 8);
    $pdf->AddPage();
    $pdf->writeHTML('<table>
      <tr><td bgcolor="gray">
        Angoisse et vif espoir, sans humeur factieuse.<br/>
        Plus allait se vidant le fatal sablier,<br/>
        Plus ma torture était âpre et délicieuse;<br/>
        Tout mon coeur s’arrachait au monde familier</td>
      <td bgcolor="lightgray">Second</td>
      <td bgcolor="gray">Third</td>
      <td>
        <table style="height: 100%">
          <tr bgcolor="blue" style="height: 30%"><td bgcolor="yellow" style="height: 30%">Ichi</td></tr>
          <tr bgcolor="white" style="height: 30%"><td bgcolor="cyan" style="height: 30%">Ni</td></tr>
          <tr bgcolor="blue" style="height: 30%"><td bgcolor="yellow" style="height: 30%">San</td></tr>
        </table>
      </td></tr>
    </table>');
    $pdf->Output('example_002.pdf', 'I');
    ?>

    I want the table in the last cell to fill it entirely. Is there any way to do this?

    Read the article

  • How to call Javascript function in JSF EL conditionally?

    - by Paul
    I have to call a Javascript function based on a bean value. I use the following code:

    onmouseover="#{occasionBean.user.userPreference.defaultPreview==true?'':'Tip()'})"

    I need to send some parameters in Tip() like this:

    Tip('<img src="pics/image.jpg" width="60">')

    The error I am getting is:

    javax.servlet.jsp.JspException: javax.faces.el.EvaluationException: com.sun.faces.el.impl.parser.ParseException: Encountered "test" at line 1, column 60. Was expecting one of: "}" ... "." ... ">" ... "gt" ... "<" ... "lt" ... "==" ... "eq" ... "<=" ... "le" ... ">=" ... "ge" ... "!=" ... "ne" ... "[" ... "+" ... "-" ... "*" ... "/" ... "div" ... "%" ... "mod" ... "and" ... "&&" ... "or" ... "||" ... "?" ...

    Read the article

  • Accessing bean object methods from xhtml in RichFaces

    - by Mark Lewis
    Hello

    When I use (1) in my xhtml, I get an error as in (2). How can I access the size of an array in my bean?

    (1) A List of objects of a custom class type, accessed through the following h:outputText in a rich:column in a rich:subTable in a rich:dataTable:

    <h:outputText value="Info: #{f.filemask.size()}" />

    (2) Caused by: com.sun.facelets.tag.TagAttributeException: /nodeConfig.xhtml @190,91 value="Info: #{f.filemask.size()}" Error Parsing: Info: #{f.filemask.size()}
    at com.sun.facelets.tag.TagAttribute.getValueExpression(TagAttribute.java:259)
    ...
    Caused by: org.apache.el.parser.ParseException: Encountered "(" at line 1, column 41. Was expecting one of: "}" ... "." ... "[" ... ">" ... "gt" ... "<" ... "lt" ... ">=" ... "ge" ... "<=" ... "le" ... "==" ... "eq" ... "!=" ... "ne" ... "&&" ... "and" ... "||" ... "or" ... "*" ... "+" ... "-" ... "/" ... "div" ... "%" ... "mod" ...

    Any help greatly appreciated. I cannot seem to find references to using methods like this, though one reference I found reported it working fine.

    Read the article

  • The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions

    - by zurna
    I get "The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP or FOR XML is also specified." error with the following code. I initially had two tables, ADSAREAS & CATEGORIES. I started receiving this error when I removed CATEGORIES table. Select Case SIDX Case "ID" : SQLCONT1 = " AdsAreasID" Case "Page" : SQLCONT1 = " AdsAreasName" Case Else : SQLCONT1 = " AdsAreasID" End Select Select Case SORD Case "asc" : SQLCONT2 = " ASC" Case "desc" : SQLCONT2 = " DESC" Case Else : SQLCONT2 = " ASC" End Select ''# search feature ---> Select Case SEARCHFIELD Case "ID" : SQLSFIELD = "AND AdsAreasID" Case "Ads Areas" : SQLSFIELD = "AND AdsAreasName" Case Else : SQLSFIELD = "" End Select Select Case SEARCHOPER Case "eq" : SQLSOPER = " = " & SEARCHSTRING Case "ne" : SQLSOPER = " <> " & SEARCHSTRING Case "lt" : SQLSOPER = " <" & SEARCHSTRING Case "le" : SQLSOPER = " <= " & SEARCHSTRING Case "gt" : SQLSOPER = " >" & SEARCHSTRING Case "ge" : SQLSOPER = " >= " & SEARCHSTRING Case "bw" : SQLSOPER = " LIKE '" & SEARCHSTRING & "%' " Case "ew" : SQLSOPER = " LIKE '%" & SEARCHSTRING & "' " Case "cn" : SQLSOPER = " LIKE '%" & SEARCHSTRING & "%' " Case Else : SQLSOPER = "" End Select ''# search feature ---> SQL = "SELECT * FROM ( SELECT A.AdsAreasID, A.AdsAreasName, ROW_NUMBER() OVER (ORDER BY A.AdsAreasID) As Row" SQL = SQL & " FROM ADSAREAS A" SQL = SQL & " WHERE Row > ("& RecordsPageSize - RecordsPerPage &") AND Row <= ("& RecordsPageSize &") ORDER BY" & SQLCONT1 & SQLCONT2 Set objXML = objConn.Execute(SQL)

    Read the article

  • How can I set a cookie in curl?

    - by Sushant Panigrahi
    I am fetching a page from a site, but it displays nothing and the URL address changes. For example, I typed http://localhost/sushant/EXAMPLE_ROUGH/curl.php. My code in the curl page is:

    $fp = fopen("cookie.txt", "w");
    fclose($fp);
    $agent = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; pl; rv:1.9) Gecko/2008052906 Firefox/3.0';
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_USERAGENT, $agent);
    // 2. set the options, including the url
    curl_setopt($ch, CURLOPT_URL, "http://www.fnacspectacles.com/place-spectacle/manifestation/Grand-spectacle-LE-ROI-LION-ROI4.htm");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_COOKIEJAR, 'cookie.txt');
    curl_setopt($ch, CURLOPT_COOKIEFILE, "cookie.txt");
    // 3. execute and fetch the resulting HTML output
    if (curl_exec($ch) === false) {
        echo 'Curl error: ' . curl_error($ch);
    } else {
        echo $output = curl_exec($ch);
    }
    // 4. free up the curl handle
    curl_close($ch);
    ?>

    But it changes the URL to something like this:

    http://localhost/aide.do?sht=_aide_cookies_

    Object not found. How can I solve this problem? Help me.
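    The redirect to aide.do?sht=_aide_cookies_ looks like the site bouncing a cookieless first request to a cookie help page. In PHP, adding curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1) lets cURL follow that redirect and resend the cookies it just saved in the jar. For comparison, here is the same cookie round trip sketched with Python's standard library (an illustration of the mechanism, not the PHP fix itself):

    import urllib.request
    from http.cookiejar import CookieJar

    # A cookie-aware opener: it stores Set-Cookie headers from the first
    # response and sends them back automatically on redirects and retries.
    jar = CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    opener.addheaders = [("User-Agent", "Mozilla/5.0 (Windows; U; Windows NT 5.1)")]

    url = "http://www.fnacspectacles.com/place-spectacle/manifestation/Grand-spectacle-LE-ROI-LION-ROI4.htm"
    html = opener.open(url).read()  # redirects are followed, cookies replayed
    print(len(html), [c.name for c in jar])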

    Read the article

  • Percent-Encoded Percent in URI

    - by Lukas
    In our application, it is possible for a user to upload files then download them later. We don't restrict them from having any special characters in the file name. The problem comes in when we create the link for the user to download the file. I use the Java URL encoder to encode the file name that gets put into the href of the link, but I'm still having problems with percent (%) signs. For example, if the user uploads a file named fi%le.jpg, the href that gets generated is fi%25le.jpg, and everything is fine. The problem is when the percent sign is right before the period (i.e., file%.jpg, which gets converted to file%25.jpg). When the user clicks on the link, they get a 404 (Not Found) error. The strange thing is that it is not a problem if the two characters following the percent sign are hex characters.... Weird, eh? Any help is appreciated. I am using Tomcat/Struts. Could the built-in URL decoder have anything to do with this problem?
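    The symptom pattern (fine when hex follows the percent, 404 otherwise) is what double decoding looks like: some component decodes the path a second time, and by then 'file%25.jpg' has already become 'file%.jpg', whose '%.j' is an invalid escape sequence that strict decoders reject. A small Python illustration of the round trip (Python shown for brevity; the same principle applies to Java's URLEncoder/URLDecoder):

    from urllib.parse import quote, unquote

    name = "file%.jpg"
    href = quote(name)              # 'file%25.jpg': the one encoding the link needs
    assert unquote(href) == name    # a single decode round-trips cleanly

    # A second decode starts from 'file%.jpg', which contains the invalid
    # escape '%.j'. Python's unquote passes invalid escapes through, but
    # stricter decoders (Java's URLDecoder, servlet containers) reject them,
    # which would explain a 404 only when non-hex characters follow the '%'.
    print(unquote(unquote(href)))   # 'file%.jpg'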

    Read the article

  • Marshalling a big-endian byte collection into a struct in order to pull out values

    - by Pat
    There is an insightful question about reading a C/C++ data structure in C# from a byte array, but I cannot get the code to work for my collection of big-endian (network byte order) bytes. (EDIT: Note that my real struct has more than just one field.) Is there a way to marshal the bytes into a big-endian version of the structure and then pull out the values in the endianness of the framework (that of the host, which is usually little-endian)? This should summarize what I'm looking for (LE=LittleEndian, BE=BigEndian):

    void Main()
    {
        var leBytes = new byte[] {1, 0};
        var beBytes = new byte[] {0, 1};
        Foo fooLe = ByteArrayToStructure<Foo>(leBytes);
        Foo fooBe = ByteArrayToStructureBigEndian<Foo>(beBytes);
        Assert.AreEqual(fooLe, fooBe);
    }

    [StructLayout(LayoutKind.Explicit, Size=2)]
    public struct Foo
    {
        [FieldOffset(0)]
        public ushort firstUshort;
    }

    T ByteArrayToStructure<T>(byte[] bytes) where T : struct
    {
        GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
        T stuff = (T)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(T));
        handle.Free();
        return stuff;
    }

    T ByteArrayToStructureBigEndian<T>(byte[] bytes) where T : struct
    {
        ???
    }
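    Not the C# marshalling answer itself, but the target behavior is easy to pin down with Python's struct module, which makes the two byte orders explicit; both unpacks below yield the same value, mirroring the Assert.AreEqual above:

    import struct

    le_bytes = bytes([1, 0])   # the leBytes example above: little-endian 1
    be_bytes = bytes([0, 1])   # the beBytes example: big-endian (network) 1

    # '<H' = little-endian unsigned 16-bit, '>H' = big-endian unsigned 16-bit.
    (foo_le,) = struct.unpack('<H', le_bytes)
    (foo_be,) = struct.unpack('>H', be_bytes)
    assert foo_le == foo_be == 1   # same value, two byte orders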

    Read the article

  • Problem with apostrophes and other special characters when using aspell in windows

    - by Loftx
    Hi there, we seem to be having a problem with the spell checker on our content management system, where it marks the ve part of We’ve as a misspelling. The spell checker uses aspell, which is called from a script on the server; the script executes cmd.exe and uses it to pipe a file into aspell (it's a long-winded way, I know, but our server-side programming language (ColdFusion) doesn't support writing to stdin for executables). Aspell is called by executing:

    c:\windows\system32\cmd.exe /c type d:\path_to_file\file.txt | "C:\Program Files\Aspell\bin\aspell" --lang=en -a

    where file.txt contains the text to be spell-checked, e.g. ^Oh have We’ve (the caret is added to prevent piping problems, I believe). Aspell then outputs:

    @(#) International Ispell Version 3.1.20 (but really Aspell 0.50.3)
    *
    *
    *
    & ve 62 12: vie, voe, V, v, veg, vet, Be, Ce, be, Ev, E, e, vex, VA, VI, Va, Vi, vi, we, VD, VF, VG, VJ, VP, VT, Vt, vb, vs, DE, De, Fe, GE, Ge, He, IE, Le, ME, Me, NE, Ne, OE, PE, Re, SE, Se, Te, Xe, he, me, re, ye, Ave, Eve, Ive, ave, eve, VAR, var, veer, vier, view, vow

    However, we have a dev site with the same version of Aspell, and when the same file is used it outputs with no misspellings. Both servers are running Aspell 0.50.3 on Windows Server 2003, but there could be other differences in configuration:

    @(#) International Ispell Version 3.1.20 (but really Aspell 0.50.3)

    I'm wondering if the problem is to do with the piping part of the process or something different in the Aspell configuration. Does anyone have any ideas?

    Cheers,
    Tom
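    If the suspicion about piping is right, the cmd.exe "type ... |" stage converts the file through the console codepage, which can mangle the curly apostrophe in We’ve before aspell ever sees it; a codepage difference between the two servers would explain the differing results. One way to test is to bypass the pipe and write to aspell's stdin directly. ColdFusion can't, but a throwaway Python script run on each server can (a diagnostic sketch; the encodings to try are guesses):

    import subprocess

    # Feed aspell over stdin directly, skipping the cmd.exe 'type | aspell'
    # stage and its console codepage conversion. Run once per candidate
    # encoding and compare the two servers' outputs.
    text = "^Oh have We’ve\n"
    for enc in ("utf-8", "cp1252"):
        proc = subprocess.run(
            [r"C:\Program Files\Aspell\bin\aspell", "--lang=en", "-a"],
            input=text.encode(enc),
            capture_output=True,
        )
        print(enc, proc.stdout.decode(enc, errors="replace"))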

    Read the article

  • Usage of putty in command line from Hudson

    - by kij
    Hi, I'm trying to use putty on the command line from a Hudson job. The command is the following one:

    putty -ssh -2 -P 22 USERNAME@SERVER_ADDR -pw PASS -m command.txt

    where 'command.txt' is a shell script to execute on the server through SSH. If I launch this command from the Windows command prompt, it works: the shell script is executed on the server machine. If I launch a build of the Hudson job configured with this batch command, it doesn't work. The build is running... and running... and running... without doing anything, and I have to stop it manually. So my question is: is it possible to launch an external program (i.e. putty) from a Hudson job?

    ps: I tried the SSH plugin but... not a really good plugin (pre/post build only, fail status of the commands launched not caught by Hudson, etc.)

    Thanks in advance for your help. Best regards. kij

    EDIT: These are the build logs:

    [workspace] $ cmd /c call C:\WINDOWS\TEMP\hudson7429256014041663539.bat
    C:\Hudson\jobs\Artifact deployer\workspace>putty -ssh -2 -P 22 USER@SERV_ADD -pw PASS -m com.txt
    Le build a été annulé [the build was aborted]
    Finished: ABORTED

    And the Hudson.err.log file at the same time (after a stop):

    3 juin 2010 18:27:28 hudson.model.Run run
    INFO: Artifact deployer #6 aborted
    java.lang.InterruptedException
        at java.lang.ProcessImpl.waitFor(Native Method)
        at hudson.Proc$LocalProc.join(Proc.java:179)
        at hudson.Launcher$ProcStarter.join(Launcher.java:278)
        at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:83)
        at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58)
        at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
        at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:601)
        at hudson.model.Build$RunnerImpl.build(Build.java:174)
        at hudson.model.Build$RunnerImpl.doRun(Build.java:138)
        at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:416)
        at hudson.model.Run.run(Run.java:1241)
        at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
        at hudson.model.ResourceController.execute(ResourceController.java:88)
        at hudson.model.Executor.run(Executor.java:124)

    My shell script only writes "hello" to a "hello.txt" file on the server, and nothing is done.
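    A plausible explanation for the hang (an assumption from the symptoms, not confirmed by the logs): putty.exe is a GUI application, so under Hudson, which runs as a service with no visible desktop, it opens a window nobody can interact with and simply waits. PuTTY ships plink.exe, a console client that accepts the same arguments, for exactly this batch case. A hypothetical sketch of the step, written in Python for consistency with the other sketches on this page:

    import subprocess

    # plink is PuTTY's console client: no window, exits when the remote
    # command finishes. -batch makes it fail instead of prompting (e.g. for
    # an uncached host key), so the build cannot hang on a hidden question.
    result = subprocess.run(
        ["plink", "-batch", "-ssh", "-2", "-P", "22",
         "USERNAME@SERVER_ADDR", "-pw", "PASS", "-m", "command.txt"],
        capture_output=True, text=True, timeout=300,
    )
    print(result.returncode)
    print(result.stdout)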

    Read the article

  • Choosing an installer product that is free and will download/install the .NET Framework

    - by Coder7862396
    I'm currently using the Visual Studio Installer (Setup Project) in Visual Studio 2010 as the installer for MyProgram. It has some quirky bugs and is not very customizable, so I would like to switch to another installer product. Here are my requirements:

    - Must be free (and licensed for commercial use)
    - Must install the Windows Installer 3.1 and .NET Framework 4.0 if the client doesn't have them
    - The installer will download them if they are not available
    - The code for detecting the .NET Framework and downloading it must be written by Microsoft (I do not want to have to update hard-coded URLs and registry keys in the future). I know that the Windows SDK includes a setup bootstrap that does this (C:\Program Files\Microsoft SDKs\Windows\v7.0A\Bootstrapper)
    - In the future, when .NET Framework 5 is released and MyProgram uses it, no installer code will need to be changed; the updated installer product should see that MyProgram now uses the .NET Framework version 5 and will install that

    Here are my current choices:

    - Visual Studio Installer: Automatically detects/downloads/installs Windows Installer and .NET Framework using a bootstrapper Setup.exe (Good!) Limited/buggy functionality (uninstall shortcuts in the Start Menu cause empty folders to be left behind during uninstall, asking the user if they want a desktop shortcut requires a lot of work, etc.)
    - NSIS: Doesn't natively support the .NET Framework, so adding it as a prerequisite requires excessive coding, hardcoded URLs, etc.
    - Inno Setup: Doesn't natively support the .NET Framework, so adding it as a prerequisite requires excessive coding, hardcoded URLs, etc.
    - WiX: Steep learning curve... not sure if I want to spend weeks learning it only to find out that it has the same uninstall problem as the Visual Studio Installer (because they both use MSI files)
    - InstallShield LE 2010: Downloading it requires me to set up a fake email account to register just to download it. Then once it is installed it has to contact the company's servers and transmit some private information to them before I'm even allowed to try the free version. This is the most insidious form of DRM that there is and I will not accept it.

    Read the article

  • copy a text file in C#

    - by melt
    I am trying to copy a text file into another text file line by line. It seems that there is a buffer of 1024 characters. If there are fewer than 1024 characters in my file, my function will not copy anything into the other file. Also, if there are more than 1024 characters but not an exact multiple of 1024, the excess characters are not copied. Ex:

    2048 characters in initial file - 2048 copied
    988 characters in initial file - 0 copied
    1256 characters in initial file - 1024 copied

    Thks!

    private void button3_Click(object sender, EventArgs e)
    {
        // write code to take the name of the selected file and
        // append a "_poly.txt" suffix
        string ma_ligne;
        const int RMV_CARCT = 9;

        // declare the files
        FileStream apt_file = new FileStream(textBox1.Text, FileMode.Open, FileAccess.Read);
        textBox1.Text = textBox1.Text.Replace(".txt", "_mod.txt");
        FileStream mdi_file = new FileStream(textBox1.Text, FileMode.OpenOrCreate, FileAccess.ReadWrite);

        // read/write the files in question
        StreamReader apt = new StreamReader(apt_file);
        StreamWriter mdi_line = new StreamWriter(mdi_file, System.Text.Encoding.UTF8, 16);

        while (apt.Peek() >= 0)
        {
            ma_ligne = apt.ReadLine();
            //if (ma_ligne.StartsWith("GOTO"))
            //{
            //    ma_ligne = ma_ligne.Remove(0, RMV_CARCT);
            //    ma_ligne = ma_ligne.Replace(" ", "");
            //    ma_ligne = ma_ligne.Replace(",", " ");
                  mdi_line.WriteLine(ma_ligne);
            //}
        }

        apt_file.Close();
        mdi_file.Close();
    }
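    A guess the excerpt seems to support: mdi_line (the StreamWriter) is never flushed or closed; only the underlying FileStream is, so whatever still sits in the writer's buffer is dropped, which matches the whole-buffers-only pattern in the counts above. The same pitfall, illustrated in Python:

    # Copy line by line; the buffered writer must be flushed or closed, or
    # the tail of the data (anything not yet written through) is lost.
    with open("in.txt", "r", encoding="utf-8") as src, \
         open("out.txt", "w", encoding="utf-8") as dst:
        for line in src:
            dst.write(line)
    # The 'with' block closes dst, which flushes its buffer. The C# analogue
    # is wrapping the StreamWriter in 'using' or calling mdi_line.Flush()
    # and mdi_line.Close() before closing the FileStream.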

    Read the article

  • Save has_and_belongs_to_many link in basic RoR app

    - by Stéphane V
    I am trying to learn the has_and_belongs_to_many relationship between my two fresh, new and simple models Product and Author, where a Product can have many authors and an Author can have lots of products. I wrote this:

    class Author < ActiveRecord::Base
      has_and_belongs_to_many :products
    end

    class Product < ActiveRecord::Base
      has_and_belongs_to_many :authors
    end

    In the partial form of the view for the products, I have:

    <p>Products</p>
    <%= collection_select(:product, :author_ids, @authors, :id, :name, :prompt => " ", :multiple => true) %>

    but when I hit the update button, I get this strange message I can't resolve myself:

    NoMethodError in ProductsController#update
    undefined method `reject' for "1":String
    Rails.root: /home/stephane/www/HABTM

    Application Trace | Framework Trace | Full Trace
    app/controllers/products_controller.rb:63:in `block in update'
    app/controllers/products_controller.rb:62:in `update'

    Request Parameters:

    {"utf8"=>"✓", "_method"=>"put", "authenticity_token"=>"2GlTssOFjTVZ9BikrIFgx22cdTOIJuAB70liYhhLf+4=", "product"=>{"title"=>"Le trésor des Templiers", "original_title"=>"", "number"=>"1", "added_by"=>"", "author_ids"=>"1"}, "commit"=>"Update Product", "id"=>"1"}

    What's wrong? Is there a problem with :product_ids... I saw on the internet that I had to put an "s", but I'm not sure what it represents... How can I link the authors_products table to the key which is given back by the drop-down menu? (here "author_ids"=>"1")

    Thx!

    Read the article

< Previous Page | 149 150 151 152 153 154 155 156 157 158 159  | Next Page >