Search Results

Search found 2683 results on 108 pages for 'statistical analysis'.

Page 49 of 108

  • What does the R function `poly` really do?

    - by merlin2011
    I have read through the manual page ?poly (which I admit I did not completely comprehend) and also read the description of the function in the book An Introduction to Statistical Learning. My current understanding is that a call to poly(horsepower, 2) should be equivalent to writing horsepower + I(horsepower^2). However, this seems to be contradicted by the output of the following code:

        library(ISLR)
        summary(lm(mpg ~ poly(horsepower, 2), data = Auto))$coef
        summary(lm(mpg ~ horsepower + I(horsepower^2), data = Auto))$coef

    Output:

                                  Estimate   Std. Error    t value       Pr(>|t|)
        (Intercept)               23.44592    0.2209163  106.13030  2.752212e-289
        poly(horsepower, 2)1    -120.13774    4.3739206  -27.46683   4.169400e-93
        poly(horsepower, 2)2      44.08953    4.3739206   10.08009   2.196340e-21

                                  Estimate     Std. Error    t value       Pr(>|t|)
        (Intercept)           56.900099702   1.8004268063   31.60367  1.740911e-109
        horsepower            -0.466189630   0.0311246171  -14.97816   2.289429e-40
        I(horsepower^2)        0.001230536   0.0001220759   10.08009   2.196340e-21

    My question is, why does the output not match, and what is poly really doing?
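    A quick way to check what poly is doing (a sketch): by default poly() builds orthogonal polynomials, which span the same model space as the raw powers but yield different coefficients; passing raw = TRUE should reproduce the horsepower + I(horsepower^2) fit exactly. Note that the t-statistic on the highest-order term (10.08009) is already identical in both outputs above, because the two models are equivalent reparameterizations.

        library(ISLR)
        # Default: orthogonal polynomials -- same fitted values, different coefficients
        summary(lm(mpg ~ poly(horsepower, 2), data = Auto))$coef
        # raw = TRUE: plain powers -- should match horsepower + I(horsepower^2) exactly
        summary(lm(mpg ~ poly(horsepower, 2, raw = TRUE), data = Auto))$coef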

    Read the article

  • How to fix “SearchAdministration.aspx webpage cannot be found. 404”

    - by ybbest
    Problems: One of my colleagues was having a weird issue today with the Search Service Application in SharePoint 2010. After he created the Search Service Application, he could not browse to the Search Administration page (http://ybbest:5555/searchadministration.aspx?appid=6508b5cc-e19a-4bdc-89b3-05d984999e3c); he got a 404 page-not-found error every time he browsed to the page. Analysis: After some basic troubleshooting, it turned out we could browse to any other page in the search application, e.g. Manage Content Sources (/_admin/search/listcontentsources.aspx) or Manage Crawl Rules (/_admin/search/managecrawlrules.aspx). After some more research, we think some of the web parts on the Search Administration page might be causing the problem. Solution: You need to activate a hidden feature:

        # Enable-SPFeature SearchAdminWebParts -url <central admin URL>
        Enable-SPFeature SearchAdminWebParts -url http://ybbest:5555

    If the feature is already enabled, you need to disable it first and then enable it again:

        Disable-SPFeature SearchAdminWebParts -url http://ybbest:5555
        Enable-SPFeature SearchAdminWebParts -url http://ybbest:5555

    References: MSDN Forum

    Read the article

  • SQLAuthority News – Best Practices for Data Warehousing with SQL Server 2008 R2

    - by pinaldave
    An integral part of any BI system is the data warehouse, a central repository of data that is regularly refreshed from the source systems. The new data is transferred at regular intervals by extract, transform, and load (ETL) processes. This whitepaper discusses best practices for data warehousing, covering ETL, analysis, and reporting as well as the relational database. Its main focus is on ‘architecture’ and ‘performance’. Download Best Practices for Data Warehousing with SQL Server 2008 R2. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Data Warehousing, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to Fix “Error occurred in deployment step ‘Activate Features’: System.TimeoutException:”

    - by ybbest
    Problem: When deploying a SharePoint 2013 workflow using Visual Studio, I got the following error:

        Error occurred in deployment step 'Activate Features': System.TimeoutException: The HTTP request has timed out after 20000 milliseconds. ---> System.Net.WebException: The request was aborted: The request was canceled.
        at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
        at Microsoft.Workflow.Client.HttpGetResponseAsyncResult`1.OnGotResponse(IAsyncResult result)
        --- End of inner exception stack trace ---
        at Microsoft.Workflow.Common.AsyncResult.End[TAsyncResult](IAsyncResult result)
        at Microsoft.Workflow.Client.Ht

    Analysis: After reading AC's blog post, I found out the issue has to do with the Service Bus, and that the following services were not started: the Service Bus Gateway and the Service Bus Message Broker. Solution: I started the Service Bus Gateway and Service Bus Message Broker services and the problem went away. References: SharePoint 2013 Workflow – Advanced Workflow Debugging with Fiddler
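    A quick way to check and start those services from PowerShell (a sketch; the display names are the ones mentioned above and assume the Workflow Manager / Service Bus components are installed on the server):

        # List the Service Bus services and their current status
        Get-Service -DisplayName "Service Bus*" | Format-Table DisplayName, Status
        # Start the two services named above
        Start-Service -DisplayName "Service Bus Gateway"
        Start-Service -DisplayName "Service Bus Message Broker"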

    Read the article

  • Discoverer 11g 11.1.1.2 Certified with EBS 12 on Five New Platforms

    - by Steven Chan
    Oracle Fusion Middleware 11g Release 1 includes Oracle Discoverer. Discoverer is an ad-hoc query, reporting, analysis, and Web-publishing tool that allows end-users to work directly with Oracle E-Business Suite OLTP data. We certified Discoverer 11gR1 11.1.1.2 with E-Business Suite Releases 11i and 12 on Linux earlier this year. Our Applications Platforms Group has just released five additional platform certifications for Discoverer 11.1.1.2 for Oracle E-Business Suite Release 12 (12.0.x and 12.1.x).

    Certified EBS 12 Platforms:
    Linux x86-64 (Oracle Enterprise Linux 4, 5)
    Linux x86-64 (RHEL 4, 5)
    Linux x86-64 (SLES 10)
    Oracle Solaris on SPARC (64-bit) (Solaris 9, 10)
    HP-UX Itanium (11.23, 11.31)
    HP-UX PA-RISC (64-bit) (11.23, 11.31)
    IBM AIX on Power Systems (64-bit) (5.3, 6.1)
    Microsoft Windows Server (32-bit) (2003, 2008)

    Read the article

  • Excel 2010 & SSAS – Search Dimension Members

    - by Davide Mauri
    Today I connected my Excel 2010 to an Analysis Services 2008 cube and I got a very nice (and unexpected) surprise! It is now finally possible to search and filter dimension members directly from the combo box window. As you can easily imagine, for medium/big dimensions this is really – really – really useful!

    Read the article

  • What a Performance! MySQL 5.5 and InnoDB 1.1 running on Oracle Linux

    - by zeynep.koch(at)oracle.com
    The MySQL performance team at Oracle has recently completed a series of benchmarks comparing Read/Write and Read-Only performance of MySQL 5.5 with the InnoDB and MyISAM storage engines. Compared to MyISAM, InnoDB delivered 35x higher throughput on the Read/Write test and 5x higher throughput on the Read-Only test, with 90% scalability across 36 CPU cores. A full analysis of the results and the MySQL configuration parameters is documented in a new whitepaper. In addition to the benchmark, the new whitepaper also includes:
    - A discussion of the use cases for each storage engine
    - Best practices for users considering the migration of existing applications from MyISAM to InnoDB
    - A summary of the performance and scalability enhancements introduced with MySQL 5.5 and InnoDB 1.1
    The benchmark itself was based on Sysbench, running on AMD Opteron "Magny-Cours" processors and Oracle Linux with the Unbreakable Enterprise Kernel. You can learn more about MySQL 5.5 and InnoDB 1.1 here and download them here to test whether you see performance gains in your real-world applications. By Mat Keep
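    For readers who want to try this kind of comparison themselves, a rough sketch of a classic Sysbench OLTP run (sysbench 0.4-era options; the whitepaper's exact configuration is not reproduced here, and the credentials are placeholders):

        # Create the test table, then run a 36-thread OLTP workload against InnoDB
        sysbench --test=oltp --mysql-table-engine=innodb --oltp-table-size=1000000 \
                 --mysql-user=bench --mysql-password=secret prepare
        sysbench --test=oltp --mysql-table-engine=innodb --oltp-table-size=1000000 \
                 --num-threads=36 --max-time=300 --max-requests=0 \
                 --mysql-user=bench --mysql-password=secret run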

    Read the article

  • Manchester UG Presentation Video

    In July I was invited to speak at the UK SQL Server UG event in Manchester. I spoke about Excel being a good data mining client. I was a little rushed at the end, as Chris Testa-O'Neill told me I had only 5 minutes to go when I had only been talking for 10 minutes. Apparently I have a reputation for running over my time allocation. At the event we also had a product demo from SQL Sentry of their BI monitoring dashboard solution; this includes SSIS, but the main thrust was SSAS. Then came Chris with a look at Analysis Services. If you have never heard Chris talk, then take the opportunity now: he is a top-class presenter and I am often found sat at the back of his classes. Here is the video link.

    Read the article

  • Toorcon14

    - by danx
    Toorcon 2012 Information Security Conference, San Diego, CA, http://www.toorcon.org/ – Dan Anderson, October 2012

    It's almost Halloween, and we all know what that means – yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14 – see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views – I'm just the messenger – although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks, below: those that I attended and that interested me.

    MalAndroid – the Crux of Android Infections, Aditya K. Sood
    Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro
    Privacy at the Handset: New FCC Rules?, Valkyrie
    Hacking Measured Boot and UEFI, Dan Griffin
    You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik
    What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold
    Accessibility and Security, Anna Shubina
    Stop Patching, for Stronger PCI Compliance, Adam Brand
    McAfee Secure & Trustmarks – a Hacker's Best Friend, Jay James & Shane MacDougall

    MalAndroid – the Crux of Android Infections
    Aditya K. Sood, IOActive, Michigan State PhD candidate

    Aditya talked about Android smartphone malware. There's a lot of old Android software out there – over 50% Gingerbread (2.3.x) – and most of it has unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grained resource control and policy enforcement. Android static analysis also includes a static analysis app checker (bouncer) and a vulnerability checker.

    What security problems does Android have? User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid); all they want is functionality, extensibility, mobility. Android had no "proper" encryption before Android 3.0. There is no built-in protection against social engineering and web tricks. Alternative Android app markets are unsafe – simply visiting some markets can infect Android.

    Aditya classified Android malware types as:
    Type A – Apps. These interact with the Android app framework. For example, a fake Netflix app, or Android Gold Dream (a game), which uploads user files in a stealthy manner to a remote location.
    Type K – Kernel. Exploits the underlying Linux libraries or kernel.
    Type H – Hybrid. These use multiple layers (app framework, libraries, kernel). They are most commonly used by Android botnets, which are popular with Chinese botnet authors.

    What are the threats from Android malware? These include leaked info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes – for example, promoting specific leaders of the Tunisian or Iranian revolutions). Android malware is frequently "masqueraded", that is, repackaged inside a legit app along with malware. To avoid detection, the hidden malware is not unwrapped until runtime; the malware payload can be hidden in, for example, PNG files. Less common are Android bootkits – there are not many around. What they do is hijack the Android init framework – altering system programs and daemons – and then delete themselves. For example, the DKF Bootkit (China).

    Android app problems: no code signing (all self-signed)! Native code execution. Permission sandbox – all or none. Alternate marketplaces. No robust Android malware detection at the network level. Delayed patch process.

    Programming Weird Machines with ELF Metadata
    Rebecca "bx" Shapiro, Dartmouth College, NH
    https://github.com/bx/elf-bf-tools
    @bxsays on twitter

    Definitions: "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap.

    Bx then talked about using ELF metadata as an (unintended) "weird machine". Some ELF background: a compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables – one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Table – works with the GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory).

    For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements: instructions (add, move, jump if not 0 (jnz)); symbol table entries as "registers"; the symbol table value as "contents"; immediate values as constants; direct values as addresses (e.g., 0xdeadbeef). A move instruction is a relocation table entry; an add instruction is a relocation table "addend" entry; a jnz instruction takes multiple relocation table entries.

    The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at the "end" and uses IFUNC table entries (containing a function pointer address). The ELF weird machine, called "Brainfu*k" (BF), has 8 instructions (pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print) and 3 registers.

    Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call dropping privilege, so it remained as root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example:

        $ whoami
        bx
        $ ping localhost -t backdoor.sh    # executes backdoor
        $ whoami
        root

    The modified code increased from 285948 bytes to 290209 bytes. A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data.

    Privacy at the Handset: New FCC Rules?
    "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate)

    Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation – no worries! Carriers need to collect data, such as missed calls, to maintain network quality, but carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated, but Verizon recommends, as an aggregation workaround, to "recollate" the data with other databases to identify customers indirectly.

    The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. The carriers say mobile phone privacy is an FTC responsibility (not the FCC's). The FTC is trying to improve mobile app privacy, but has no authority over carrier/customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones; as a result, iPhones may be more regulated. Who are the consumer advocates? Everyone knows the EFF, but EPIC (Electronic Privacy Information Center), although more obscure, is more relevant.

    What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need an incentive to grant users control for those who want it, by holding them liable and responsible for breaches on their clock. Location information should be added to current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (which would still be a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House. There will probably be new regulation soon; enforcement will be a problem, but consumers will still see some benefit.

    Hacking Measured Boot and UEFI
    Dan Griffin, JWSecure, Inc., Seattle, @JWSdan

    Dan talked about hacking measured UEFI boot. First some terms: UEFI is a boot technology that is replacing BIOS (it has whitelisting and blacklisting). UEFI protects devices against rootkits. TPM is a hardware security device that stores hashes and hardware-protected keys. "Secure boot" can control at the firmware level what boot images can boot. "Measured boot" is an OS feature that tracks hashes (from the BIOS, boot loader, kernel, early drivers). "Remote attestation" allows remote validation and control based on policy on a remote attestation server. Microsoft is pushing TPM (required for Windows 8), but Google is not. Intel TianoCore is the only open source implementation for UEFI. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines.

    UEFI toolkits are evolving rapidly, but UEFI has weaknesses: it assumes the user is an ally; it trusts the TPM implicitly and that it is attached to the computer; the hibernate file is unprotected (disk encryption protects against this); protection is migrating from hardware to firmware; there are delays in patching and whitelist updates; and will UEFI really be adopted by the mainstream (smartphone hardware support, bank support, apathetic consumer support)?

    You Can't Buy Security: Building the Open Source InfoSec Program
    Boris Sverdlik, ISDPodcast.com co-host

    Boris talked about problems typical of current security audits. "IT Security" is an oxymoron – IT exists to enable business, uptime, utilization, and reporting, but doesn't care about security – IT has a conflict of interest. There's no magic bullet ("blinky box") and no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not the solution (security is not a checklist) – what's needed is experience and adaptability – soft skills.

    Step 1: The first thing is to Google and learn the company end-to-end before you start. Get to know the management team (not the IT team) and meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g. AV*EF=SLE). Learn the different business units and their legal/regulatory obligations, learn the business and where the money is made, verify the company is protected from script kiddies (easy), learn the sensitive information (IP, internal use only), and start with the low-hanging fruit (customer service reps and social engineering).

    Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, and physical security.

    Step 3: Implementation: keep it simple, stupid. Open source, although useful, is not free (implementation cost). Access controls with authentication & authorization for local and remote access: MS Windows has it; otherwise use OpenLDAP, OpenIAM, etc.

    Application security: everyone tries to reinvent the wheel – use existing static analysis tools. Review high-risk apps and major revisions. Don't run apps of different risk levels on the same system. Assume the host/client is compromised and use app-level security controls.

    Network security: VLAN != segregated, because there are too many workarounds. Use explicit firewall rules, use active and passive network monitoring (snort is free), disallow end-user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth, and SSL does not mean "safe."

    Operational controls: have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the logs (they have everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVAS and Seccubus are free.

    Security awareness: the reality is users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture. Pen test by crowd-sourcing – test with logging.

    COSSP http://www.cossp.org/ – Comprehensive Open Source Security Project

    What Journalists Want: The Investigative Reporters' Perspective on Hacking
    Dave Maas, San Diego CityBeat; Jason Leopold, Truthout.org

    The difference between hackers and investigative journalists: for hackers, the motivation varies, but the method is the same, with technological specialties. For investigative journalists, it's about one thing – The Story – and they need broad info-gathering skills. J-School in 60 seconds: the generic formula is a person or issue of public interest, new info, or a new angle; the generic criteria are proximity, prominence, timeliness, human interest, oddity, or consequence.

    Media awareness of hackers and trends: journalists are becoming extremely aware of hackers, with congressional debates (privacy, data breaches), demand for data-mining journalists, use of coding and web development by journalists, and journalists busted for hacking (Murdoch).

    Info gathering by investigative journalists includes public records laws. The federal Freedom of Information Act (FOIA) is good, but slow; the California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging – it helps to be specific – and they often need to sue (especially the FBI). CPRA is faster, and requests can be vague. There are also dumps and leaks (a la Wikileaks).

    Journalists want leads, protection for themselves and their sources, and tools adapted for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web logs). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak – there's no upside for whistleblowers – they have to be very passionate to do it.

    Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem
    Anna Shubina, Dartmouth College

    Anna talked about how accessibility and security are related. Accessibility of digital content (not real-world accessibility) mostly refers, for our purposes, to blind users and screen readers. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word has an executable format – it's not a document exchange format – and is more dangerous than PDF or HTML. Accessibility is often the first and maybe only sanity check with parsing. They have no choice, because someone may want to read what you write. Google, for example, is very particular about the web browser you use and is bad at supporting other browsers; it uses JavaScript instead of links, often requiring mouseover to display content.

    PDF is a security nightmare: an executable format with embedded Flash, JavaScript, etc., and 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all.

    The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic security says complicated data formats are hard to parse and the problem cannot be solved, due to the Halting Problem.

    The W3C Web Accessibility Guidelines say "Perceivable, Operable, Understandable, Robust". Not much help, though, except for "Robust", but here are some gems: all information should be parsable (paraphrasing); if it is not parsable, it cannot be converted to alternate formats; maximize compatibility in new document formats.

    Executable web pages are bad for security and accessibility. They say it's for a better web experience, but is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report – it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News – hidden scrollbars, guessing user input.

    Solutions: accessibility and security problems come from the same source. Expose the "better user experience" myth. Keep your corner of the Internet parsable. Remember the "Halting Problem" – recognize false solutions (checking and verifying tools).

    Stop Patching, for Stronger PCI Compliance
    Adam Brand, Protiviti, @adamrbrand, http://www.picfun.com/

    Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (an IT guy) – 960 hours/year – was spent patching POSs in 850 restaurants, and applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is the overuse of vulnerability scanners – it gives a warm fuzzy feeling and it's simple (red or green results – fix the reds). Scanners give a false sense of security. In reality, breaches from missing patches are uncommon – more common problems are default passwords, cleartext authentication, and misconfiguration (open firewall ports).

    Patching myths:
    Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead).
    Myth 2: the vendor decides what's critical (also PCI §6.1; but §6.2 requires user ranking of vulnerabilities instead).
    Myth 3: scan and rescan until it passes (but PCI §11.2.1b says this applies only to high-risk vulnerabilities).

    Adam says good recommendations come from NIST 800-40: use sane patching and focus on what's really important. From NIST 800-40:
    Proactive: use a proactive vulnerability management process: change control, configuration management, file integrity monitoring.
    Monitor: start with the NVD and other vulnerability alerts, not scanner results.
    Evaluate: public-facing system? workstation? internal server? (risk rank)
    Decide: on action and timeline.
    Test: pre-test patches (stability, functionality, rollback) for change control.
    Install: notify, change control, tickets.

    McAfee Secure & Trustmarks – a Hacker's Best Friend
    Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada

    "McAfee Secure Trustmark" is a website seal marketed by McAfee; a website gets this badge if it passes their remote scanning. The problem is that removal of the trustmark acts as a flag that you're vulnerable, and it's easy to view status changes by watching the McAfee list on their website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines: if a site's certification image changes to a 1x1-pixel image, it is no longer certified. Their scripts take deltas of the scans to see what changed daily. The bottom line is that a change in TrustGuard status is a flag for hackers to attack your site. The entire idea of seals is silly – you're raising a flag showing whether you're vulnerable.

    Read the article

  • How to fix “Microsoft SharePoint is not supported with version 4.0.30319.225 of the Microsoft .Net Runtime” in PowerGUI

    - by ybbest
    Today, when I tried to run some PowerShell commands against SharePoint in PowerGUI, I encountered the error message below. Problem: Remove-SPSite : Microsoft SharePoint is not supported with version 4.0.30319.225 of the Microsoft .Net Runtime. At C:\SiteCreation.ps1:37 char:14 + CategoryInfo : InvalidData: (Microsoft.Share…mdletRemoveSite:SPCmdletRemoveSite) [Remove-SPSite], PlatformNotSupportedException Analysis: The error message is pretty clear: PowerGUI tries to run the PowerShell command under .Net version 4.0, which is not supported by SharePoint 2010; SharePoint 2010 only supports .Net 3.5. So how can I change the settings so that PowerShell runs under .Net 3.5 in PowerGUI? The solution is pretty easy. Solution: 1. Open Windows Explorer, navigate to C:\Program Files (x86)\PowerGUI\ and open the configuration file ScriptEditor.exe.config. 2. Change the supportedRuntime setting under the startup section by removing the version="v4.0" entry. 3. Restart PowerGUI and rerun your script. It works like a charm.
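    The before/after screenshots from the original post are not reproduced here; the edit is roughly the following (a sketch of a typical startup section, so the exact contents of your ScriptEditor.exe.config may differ). .Net 3.5 runs on the 2.0 CLR (v2.0.50727), so removing the v4.0 entry makes PowerGUI load the runtime that SharePoint 2010 supports:

        <!-- Before: the v4.0 entry is used, loading the .Net 4.0 runtime -->
        <startup>
          <supportedRuntime version="v4.0" />
          <supportedRuntime version="v2.0.50727" />
        </startup>

        <!-- After: only the 2.0 CLR is listed, so .Net 3.5 code (SharePoint 2010) can run -->
        <startup>
          <supportedRuntime version="v2.0.50727" />
        </startup>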

    Read the article

  • How to fix “Collection was modified; enumeration operation may not execute. “when using Copy.asmx in SharePoint2010

    - by ybbest
    Problem: In my current project, we use the copy.asmx web service in SharePoint 2010 to upload documents to a document library. In one of the document libraries, when uploading a document with a choice field, it blows up with the error message: Collection was modified; enumeration operation may not execute. Analysis: After some research, we found out the problem is that we have two content types that both have a field called Document Type; although the internal names are different, the display names are exactly the same, and this causes the problem. I am still not too sure why; it could possibly be a SharePoint bug. Solution: After renaming one of the fields' display names to something different, it works like a charm. You can download the source code with the problem here and the source code without the problem here.
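    For context, a minimal sketch of the kind of copy.asmx call involved (the proxy type names come from a Visual Studio web reference to /_vti_bin/Copy.asmx; the URLs, field names and values here are placeholders, not the project's actual code):

        // Upload a document and set a choice field via the Copy web service
        var svc = new CopyService.Copy();
        svc.Url = "http://server/sites/demo/_vti_bin/Copy.asmx";
        svc.UseDefaultCredentials = true;

        var destinationUrls = new[] { "http://server/sites/demo/Documents/report.docx" };
        var fields = new[]
        {
            new CopyService.FieldInformation
            {
                Type = CopyService.FieldType.Choice,
                InternalName = "DocumentType",  // display name "Document Type" was shared by two content types
                DisplayName = "Document Type",
                Value = "Invoice"
            }
        };
        byte[] fileBytes = System.IO.File.ReadAllBytes(@"C:\temp\report.docx");
        CopyService.CopyResult[] results;
        // SourceUrl is a dummy value when streaming the file bytes directly
        svc.CopyIntoItems("http://dummy", destinationUrls, fields, fileBytes, out results);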

    Read the article

  • New Supply Chain, S&OP, & TPM Analyst Reports from Gartner, IDC Now Available

    - by Mike Liebson
    Check out these analyst reports Oracle has recently made available for customers and partners on Oracle.com:
    - Gartner: MarketScope for Stage 3 Sales and Operations Planning - Gartner lead supply chain planning analyst Tim Payne discusses the evolving definition of S&OP, the Gartner S&OP maturity model, and recommendations for selecting S&OP technology solutions.
    - Gartner: Vendor Panorama for Trade Promotion Management in Consumer Goods - Consumer goods analyst Dale Hagemeyer presents an overview of the TPM market, followed by an analysis of vendor offerings.
    - IDC: Perspective: Oracle OpenWorld 2012 — Supply Chain as a Focus - Supply chain analyst Simon Ellis discusses supply chain highlights from the October OpenWorld conference. Value Chain Planning highlights include the VCP product roadmap and demand sensing presentations by Electronic Arts (Demantra) and Sony (Demand Signal Repository).
    For a complete set of analyst reports, visit here.

    Read the article

  • Troubleshooting Application Timeouts in SQL Server

    - by Tara Kizer
    I recently received the following email from a blog reader: "We are having an OLTP database instance, using SQL Server 2005 with little to moderate traffic (10-20 requests/min). There are also bulk imports that occur at regular intervals in this DB and the import duration ranges between 10secs to 1 min, depending on the data size. Intermittently (2-3 times in a week), we face an issue, where queries get timed out (default of 30 secs set in application). On analyzing, we found two stored procedures, having queries with multiple table joins inside them of taking a long time (5-10 mins) in getting executed, when ideally the execution duration ranges between 5-10 secs. Execution plan of the same displayed Clustered Index Scan happening instead of Clustered Index Seek. All required Indexes are found to be present and Index fragmentation is also minimal as we Rebuild Indexes regularly alongwith Updating Statistics. With no other alternate options occuring to us, we restarted SQL server and thereafter the performance was back on track. But sometimes it was still giving timeout errors for some hits and so we also restarted IIS and that stopped the problem as of now."

    Rather than respond directly to the blog reader, I thought it would be more interesting to share my thoughts on this issue in a blog. There are a few things that I can think of that could cause abnormal timeouts:
    - Blocking
    - Bad plan in cache
    - Outdated statistics
    - Hardware bottleneck

    To determine if blocking is the issue, we can easily run sp_who/sp_who2 or a query directly on sysprocesses (select * from master..sysprocesses where blocked <> 0). If blocking is present and consistent, then you'll need to determine whether or not to kill the parent blocking process. Killing a process will cause the transaction to roll back, so you need to proceed with caution. Killing the parent blocking process is only a temporary solution, so you'll need to do more thorough analysis to figure out why the blocking was present. You should look into missing indexes and perhaps consider changing the database's isolation level to READ_COMMITTED_SNAPSHOT.

    The blog reader mentions that the execution plan shows a clustered index scan when a clustered index seek is normal for the stored procedure. A clustered index scan might have been chosen either because that is what is in cache already or because of out-of-date statistics. The blog reader mentions that bulk imports occur at regular intervals, so outdated statistics is definitely something that could cause this issue. The blog reader may need to update statistics after imports are done if the imports are changing a lot of data (greater than 10%). If the statistics are good, then the query optimizer might have chosen to scan rather than seek in a previous execution because the scan was determined to be less costly due to the value of an input parameter. If this parameter value is rare, then its execution plan in cache is what we call a bad plan. You want the best plan in cache for the most frequent parameter values. If a bad plan is a recurring problem on your system, then you should consider rewriting the stored procedure. You might want to break up the code into multiple stored procedures so that each can have a different execution plan in cache.

    To remove a bad plan from cache, you can recompile the stored procedure. An alternative method is to run DBCC FREEPROCCACHE, which drops the procedure cache. It is better to recompile stored procedures than to drop the procedure cache, as dropping the procedure cache affects all plans in cache rather than just the ones that were bad, so there will be a temporary performance penalty until the plans are loaded into cache again.

    To determine if there is a hardware bottleneck occurring, such as slow I/O or high CPU utilization, you will need to run Performance Monitor on the database server. Hopefully you already have a baseline of the server so you know what is normal and what is not. Be on the lookout for I/O requests taking longer than 12 milliseconds and CPU utilization over 90%. The servers that I support are typically under 30% CPU utilization, but your baseline could be higher and still be within a normal range.

    If restarting the SQL Server service fixes the problem, then the problem was most likely due to blocking or a bad plan in the procedure cache. Rather than restarting the SQL Server service, which causes downtime, the blog reader should instead analyze the things mentioned above. Proceed with caution when restarting the SQL Server service, as all transactions that have not completed will be rolled back at startup. This crash recovery process could take longer than normal if there was a long-running transaction running when the service was stopped. Until the crash recovery process is completed on the database, it is unavailable to your applications.

    If restarting IIS fixes the problem, then the problem might not have been inside SQL Server. Prior to taking this step, you should do an analysis of the things mentioned above. If you can think of other reasons why the blog reader is facing this issue a few times a week, I'd love to hear your thoughts via a blog comment.
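    A compact sketch of the checks discussed above (the procedure name is a placeholder):

        -- Is anything blocked right now? (sysprocesses still works on SQL Server 2005)
        EXEC sp_who2;
        SELECT spid, blocked, waittime, lastwaittype
        FROM master..sysprocesses
        WHERE blocked <> 0;

        -- Evict a suspected bad plan for one procedure instead of flushing everything
        EXEC sp_recompile N'dbo.SuspectProc';

        -- Heavier hammer: drops every plan in cache (temporary performance penalty)
        DBCC FREEPROCCACHE;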

    Read the article

  • What’s Your Tax Strategy? Automate the Tax Transfer Pricing Process!

    - by tobyehatch
    Does your business operate in multiple countries? Well, whether you like it or not, many local and international tax authorities inspect your tax strategy. Legal, effective tax planning is perceived as a "moral" issue, and CEOs are being asked to testify on their process of tax transfer pricing between multinational legal entities. Marc Seewald, Senior Director of Product Management for EPM Applications specializing in all tax subjects and Product Manager for Oracle Hyperion Tax Provisioning, and Bart Stoehr, Senior Director of Product Strategy for Oracle Hyperion Profitability and Cost Management, joined me for a discussion/podcast on this interesting subject.

    So what exactly is "tax transfer pricing"? Marc defined it this way: "Tax transfer pricing is a profit allocation methodology required to be used by multinational corporations. Specifically, the ultimate goal of the transfer pricing is to ensure that the global multinational pays their fair share of income tax in each of their local markets. Specifically, it prevents companies from unfairly moving profit from 'high tax' countries to 'low tax' countries." According to Marc, in today's global economy, profitability can be significantly impacted by goods and services exchanged between related divisions within a single multinational company. To ensure that these cost allocations are done fairly, there are rules that govern the process. These rules ensure that intercompany allocations fairly represent the actual nature of the business's activity, as if the two divisions were unrelated, and provide a clear audit trail of how the costs have been allocated, to prove that allocations fall within reasonable ranges.

    What are the repercussions of improper tax transfer pricing? How important is it? Tax transfer pricing allocations can materially impact the amount of overall corporate income taxes paid by a company worldwide, in some cases by hundreds of millions of dollars! Since so much tax revenue is at stake, revenue agencies like the IRS and international regulatory bodies like the Organization for Economic Cooperation and Development (OECD) are pushing to reform and clarify reporting for tax transfer pricing. Most recently, the OECD announced an "Action Plan for Base Erosion and Profit Shifting". As Marc explained, the times are changing and companies need to be responsive to this issue. "It feels like every other week there is another company being accused of avoiding taxes," said Marc. Most recently, Caterpillar was accused of avoiding billions of dollars in taxes. In the last couple of years, Apple, GE, Ikea, and Starbucks have all been accused of tax avoidance. It's imperative that companies like these have a clear and auditable tax transfer process that enables them to justify tax transfer pricing allocations and avoid steep penalties and bad publicity.

    Transparency and efficiency are what is needed when it comes to the tax transfer pricing process. Bart explained that tax transfer pricing is driving a deeper inspection of profit recognition, specifically focused on the tax element of profit. However, the allocations needed to support tax profitability are nearly identical in process to allocations taking place in other parts of the finance organization. For example, the methods and processes necessary to arrive at tax profitability by legal entity are no different from those used to arrive at fully loaded profitability for a product line. In fact, there is a great opportunity for alignment across these two different functions.

    So it seems that tax transfer pricing should be reflected in profitability in general. Bart agreed, and told us more about some of the critical sub-processes of an overall tax transfer pricing process within the Oracle solution for tax transfer pricing. "First, there is a ton of data preparation, enrichment and pre-allocation data analysis that is managed in the Oracle Hyperion solution. This serves as the 'data staging' for the next, critical sub-processes. From here, we leverage the Oracle EPM platform's ability to re-use dimensions and legal entity driver data and financial data with Oracle Hyperion Profitability and Cost Management (HPCM). Within HPCM, we manage the driver data, define the legal entity to legal entity allocation rules (like cost plus), and have the option to test out multiple, simultaneous tax transfer pricing what-if scenarios. Once processed, a tax expert can evaluate the effectiveness of any one scenario result versus another via a variance analysis configured with HPCM's pre-packaged reporting capability known as Oracle Hyperion SmartView for Office."

    Further, Bart explained that the ability to visibly demonstrate how a cost or revenue has been allocated is really helpful and auditable. "HPCM's Traceability Maps are that visual representation of all allocation flows that have been executed, and are the tax transfer analyst's best friend in maintaining clear documentation for tax transfer pricing audits. Simply click and drill as you inspect the chain of allocation definitions and results. Once final, the post-allocated tax data can be compared to the GL to create invoices and journal entries for posting to your GL system of choice. Of course, there is a framework for overall governance of the journal entries, allocation percentages, and reporting to include necessary approvals."

    Lastly, Marc explained that the key value in using the Oracle Hyperion solution for tax transfer pricing is that it keeps everything in alignment in one single place. Specifically, Oracle Hyperion effectively becomes the single book of record for the GAAP, management, and tax sets of books. There are many benefits to having one source of the truth. These include EFFICIENCY, CONTROLS and TRANSPARENCY.

    So, what's your tax strategy? Why not automate the tax transfer pricing process! To listen to the entire podcast, click here. To learn more about Oracle Hyperion Profitability and Cost Management (HPCM), click here.

    Read the article

  • Interview with Java Champion Matjaz B. Juric on Cloud Computing, SOA, and Java EE 6

    - by [email protected]
    In a Java Champion interview, Matjaz Juric of Slovenia, head of the Cloud Computing and SOA Competence Centre at the University of Maribor and professor at the University of Ljubljana, shares insights about cloud computing, SOA and Java EE 6. Juric has worked on performance analysis and optimization of RMI-IIOP, has been a member of the BPEL Advisory Board, and is a Java mentor and trainer. Regarding BPEL he remarks, "Probably the most important thing to understand is what should be programmed in Java and what should be programmed in BPEL. There is still some confusion. BPEL is for the process logic, while Java is for functionalities. Together, BPEL and Java form a strong alliance and enable faster development and maintenance of enterprise applications and their integrations. On the other hand, the integration between Java and BPEL could be improved. There have been different approaches, including Java snippets. I would like to see an XML data type in Java, without all the hassles with JAXB, mappings, or DOM." Read the rest of the article here.

    Read the article

  • Please recommend citations for source code documentation standards

    - by Aerik
    I'm trying to convince another group in my company that they need to provide more documentation in their source code (they want to hand off the code to my group) but they're treating it as a "nice to have". In my view, it's a necessity. I've run a source code analysis tool and it's showing about 10% comment lines - but looking at the source code, most of that is coming from entire functions that the author has commented out. Can anyone provide some authoritative citations / references for documentation / comment standards for source code? (In case it matters, we're a C# house, with a little Matlab thrown in).

    Read the article

  • Connect Microsoft Excel To SQL Azure Database

    A number of people have found my post about getting started with SQL Azure pretty useful. But it's all worthless if it doesn't add up to user value. Databases are like potential energy in physics: a promise that something could be put in motion. Users actually making decisions based on analysis is analogous to kinetic energy in physics; it's the fulfillment of the promise of potential energy. So what does this have to do with Office 2010? In Excel 2010 we made it truly easy to connect to a SQL Azure...

    Read the article

  • Upcoming Webcast: “Supporting References” In Release 12 - SLA

    - by Oracle_EBS
    ADVISOR WEBCAST: “Supporting References” in Release 12 - SLA
    PRODUCT FAMILY: Receivables, Payables, General Ledger
    April 18, 2012 at 14:00 UK / 15:00 CET / 06:00 am Pacific / 7:00 am Mountain / 9:00 am Eastern

    “Supporting References” enables users to enter additional information for the “Journal Entry Header” and “Journal Entry Lines” that can be used for analytical purposes. This functionality was earlier known as “Analytical Criteria”. In 11i, additional data was interfaced to GL on the journals using “Descriptive Flexfields”, whereas using this feature in R12, customers can create customized sources as ‘Supporting References’ and pass values into those sources from Subledgers like AP / AR / PA, etc.

    TOPICS WILL INCLUDE:
    - Supporting References
    - Business information about a subledger journal entry at the header or line level
    - Establishing a subledger balance for a particular source value, or combination of source values, for a particular account
    - Assisting with reconciliation of account balances
    - Financial and managerial analysis

    A short, live demonstration (only if applicable) and a question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. The current schedule can be found in Note 740966.1. Post-presentation recordings can be found in Note 740964.1.

    Read the article

  • What are the strategies to become a good open source developer?

    - by u3050
    I always hear that getting involved with open source projects is good for your career, and that the more (good) open source you release, the closer you will be to getting your dream job before you've even had an interview. I am an expert in Java and I am trying to become fluent in Scala. I always think about getting involved in open source development in Java/Scala, but the following confusions stop me from doing so. How/where do I start with open source development projects on GitHub etc.? Or how do I find active/busy open source development projects? How do I find an area in such projects where improvement or enhancement is required? It looks too complex on first analysis, or it's pretty hard to find such opportunities. What are the common strategies to follow if I want to become a hobbyist/free-time open source developer? People who have experience in open source development, please share your learnings/expertise. Thanks in advance.

    Read the article

  • Finding a problem in some task [closed]

    - by nagisa
    Recently I competed in a nationwide programming contest final. Not unexpectedly, all problems were algorithmic. I lost (40 points out of 600; the winner got ~300). I know very well why I lost: I don't know how to find the actual problem in those obfuscated tasks which are the life-blood of every competition. I think that being self-taught and not well versed in algorithms hurt me too. As a side effect of learning things myself, I know how to search for information; however, all I could find were a couple of questions about learning algorithms. For now I have put Python Algorithms: Mastering Basic Algorithms in the Python Language and Analysis of Algorithms, which I found in those questions, on my "to read" list. That leaves my first problem, of not knowing how to find a problem, unsolved. Will that ability come with learning algorithms? Or does it need some special attention? Any suggestions are welcome.

    Read the article

  • Google I/O 2010 - The joys of engineering leadership

    Google I/O 2010 - How to lose friends and alienate people: The joys of engineering leadership. Tech Talks: Brian W. Fitzpatrick, Ben Collins-Sussman. Are you considered the 'point' person for your team? Do you have sweaty palms, headaches, and a calendar full of meetings? You may have an affliction called 'manager'. This is treatable through careful analysis and therapy. We'll examine how you may have arrived at this state and how you can once again regain your self-respect and the respect of your peers. Hear real-life stories of both good and bad leadership. Learn to lead by following. For all I/O 2010 sessions, please go to code.google.com/events/io/2010/sessions.html (Video length: 56:02)

    Read the article

  • How to fix “Unable to cast COM object of type ‘Microsoft.SharePoint.Library.SPRequestInternalClass’ to interface type ‘Microsoft.SharePoint.Library.ISPRequest” using PowerGUI

    - by ybbest
    I got this error today when debugging some of my PowerShell scripts in PowerGUI; the script works perfectly fine in the PowerShell console. I had spent a couple of hours scratching my head trying to figure out why. Problem: PowerGUI throws the above exception when debugging my PowerShell script. Analysis: It turns out that the PowerShell Variables panel causes the problem. I am not quite sure why, but I assume it calls some function to grab the values of some of the variables, which causes the problem. Solution: Collapsing or closing the Variables panel fixes the problem.

    Read the article

  • Application Lifecycle Management with Visual Studio 2010 – Wrox Book

    - by Guy Harwood
    After running with a somewhat disconnected set of tools (VS 2008, OnTime, SharePoint 2007) for managing our projects, we decided to make the move to Team Foundation Server 2010. With limited coverage of the product available online, I went in search of a book and found this… View this book on the Wrox website. I must point out that I have only read 10 of the 26 chapters so far, mainly the ones that cover source code control, work item tracking and database projects. This enables our dev team to get familiar with it before switching project management over at a future date. Needless to say, I am very impressed with the detail it provides, answering pretty much every question I had about TFS so far. I'm looking forward to digging into the sections on testing, code analysis and architecture. Highly recommended.

    Read the article

  • Big Data – What is Big Data – 3 Vs of Big Data – Volume, Velocity and Variety – Day 2 of 21

    - by Pinal Dave
    Data is forever. Think about it – it is indeed true. Are you using any application, as it is, which was built 10 years ago? Are you using any piece of hardware which was built 10 years ago? The answer is most certainly No. However, if I ask you – are you using any data which was captured 50 years ago? – the answer is most certainly Yes. For example, look at the history of our nation. I am from India and we have documented history which goes back over thousands of years. Well, just look at our birthday data – at least we are using it to this day. Data never gets old and it is going to stay there forever. Applications which interpret and analyze data have changed, but the data has remained in its purest format in most cases.

    As organizations have grown, the data associated with them has also grown exponentially, and today there is a lot of complexity to their data. Most big organizations have data in multiple applications and in different formats. The data is also spread out so much that it is hard to categorize it with a single algorithm or logic. The mobile revolution which we are experiencing right now has completely changed how we capture data and build intelligent systems. Big organizations are indeed facing challenges to keep all their data on a platform which gives them a single consistent view of their data. This unique challenge of making sense of all the data coming in from different sources, and deriving useful actionable information out of it, is the revolution the Big Data world is facing.

    Defining Big Data
    The 3Vs that define Big Data are Variety, Velocity and Volume.

    Volume
    We currently see exponential growth in data storage, as data is now more than just text. We can find data in the format of videos, music and large images on our social media channels. It is very common for enterprises to have terabytes and petabytes of storage. As the database grows, the applications and architecture built to support the data need to be re-evaluated quite often. Sometimes the same data is re-evaluated from multiple angles, and even though the original data is the same, the newfound intelligence creates an explosion of the data. This big volume indeed represents Big Data.

    Velocity
    The data growth and social media explosion have changed how we look at data. There was a time when we used to believe that the data of yesterday is recent. As a matter of fact, newspapers still follow that logic. However, news channels and radio have changed how fast we receive the news. Today, people rely on social media to keep them updated with the latest happenings. On social media, messages that are even a few seconds old (a tweet, a status update, etc.) are sometimes no longer what interests users. They often discard old messages and pay attention to recent updates. The data movement is now almost real time, and the update window has been reduced to fractions of a second. This high-velocity data represents Big Data.

    Variety
    Data can be stored in multiple formats. For example: a database, Excel, CSV, Access or, for that matter, a simple text file. Sometimes the data is not even in a traditional format as we assume; it may be in the form of video, SMS, PDF or something we might not have thought about. It is the need of the organization to arrange it and make it meaningful. It would be easy to do so if we had data in the same format, however that is not the case most of the time. The real world has data in many different formats, and that is the challenge we need to overcome with Big Data. This variety of data represents Big Data.

    Big Data in Simple Words
    Big Data is not just about lots of data; it is actually a concept providing an opportunity to find new insight into your existing data, as well as guidelines to capture and analyze your future data. It makes any business more agile and robust, so it can adapt and overcome business challenges.

    Tomorrow
    In tomorrow's blog post we will discuss the Evolution of Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article
