Search Results


  • DataContractSerializer: type is not serializable because it is not public?

    - by Michael B. McLaughlin
    I recently ran into an odd and annoying error when working with the DataContractSerializer class for a WP7 project. I thought I’d share it to save others who might encounter it the same annoyance I had. So I had an instance of ObservableCollection<T> that I was trying to serialize (with T being a class I wrote for the project) and whenever it would hit the code to save it, it would give me: The data contract type 'ProjectName.MyMagicItemsClass' is not serializable because it is not public. Making the type public will fix this error. Alternatively, you can make it internal, and use the InternalsVisibleToAttribute attribute on your assembly in order to enable serialization of internal members - see documentation for more details. Be aware that doing so has certain security implications. This, of course, was malarkey. I was trying to write an instance of MyAwesomeClass that looked like this:

        [DataContract]
        public class MyAwesomeClass
        {
            [DataMember]
            public ObservableCollection<MyMagicItemsClass> GreatItems { get; set; }

            [DataMember]
            public ObservableCollection<MyMagicItemsClass> SuperbItems { get; set; }

            public MyAwesomeClass()
            {
                GreatItems = new ObservableCollection<MyMagicItemsClass>();
                SuperbItems = new ObservableCollection<MyMagicItemsClass>();
            }
        }

    That’s all well and fine. And MyMagicItemsClass was also public with a parameterless public constructor. It too had DataContractAttribute applied to it, and it had DataMemberAttribute applied to all the properties and fields I wanted to serialize. Everything should be cool, but it’s not, because I kept getting that “not public” exception. I could tell you about all the things I tried (generating a List<T> on the fly to make sure it wasn’t ObservableCollection<T>, trying to serialize the collections directly, moving it all to a separate library project, etc.), but I want to keep this short. In the end, I remembered the “Debug->Exceptions…” VS menu option that brings up the list of exception-related circumstances under which the Visual Studio debugger will break. I checked the “Thrown” checkbox for “Common Language Runtime Exceptions”, started the project under the debugger, and voilà: the true problem revealed itself. Some of my properties had fairly elaborate setters whose logic I wanted to ignore. So for some of them, I applied an IgnoreDataMember attribute and applied the DataMember attribute to the underlying fields instead. All of which, in line with good programming practices, were private. Well, it just so happens that WP7 apps run in a “partial trust” environment, and outside of “full trust”-land, DataContractSerializer refuses to serialize or deserialize non-public members. Of course that exception was swallowed up internally by .NET, so all I ever saw was that bizarre message about things that I knew for certain were public being “not public”. I changed all the private fields I was serializing to public and everything worked just fine. In hindsight it all makes perfect sense. The serializer uses reflection to build up its graph of the object in order to write it out. In partial trust, you don’t want people using reflection to get at non-public members of an object, since there are potential security problems with allowing that (you could break out of the sandbox pretty quickly by reflecting and calling the appropriate methods, and cause some havoc by reflecting and setting the appropriate fields in certain circumstances).
The fact that you cannot reflect your own assembly seems a bit heavy-handed, but then again I’m not a compiler writer or a framework designer and I have no idea what sorts of difficulties would go into allowing that from a compilation standpoint or what sorts of security problems allowing that could present (if any). So, lesson learned. If you get an incomprehensible exception message, turn on break on all thrown exceptions and try running it again (it might take a couple of tries, depending) and see what pops out. Chances are you’ll find the buried exception that actually explains what was going on. And if you’re getting a weird exception when trying to use DataContractSerializer complaining about public types not being public, chances are you’re trying to serialize a private or protected field/property.
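    To make the failure mode concrete, here is a minimal sketch of the pattern that triggers it (the member names are hypothetical, not from the original project):

        [DataContract]
        public class MyMagicItemsClass
        {
            // Under partial trust, DataContractSerializer cannot reflect over
            // non-public members, so marking a private field as a DataMember
            // produces the misleading "not public" exception on WP7:
            //[DataMember]
            //private string _name;

            // Making the serialized member public avoids the problem.
            [DataMember]
            public string Name;

            // The elaborate setter stays out of serialization entirely.
            [IgnoreDataMember]
            public string DisplayName
            {
                get { return Name; }
                set { Name = (value ?? string.Empty).Trim(); }
            }
        }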

    Read the article

  • How does the Trash Can work, and where can I find official documentation, reference, or specification for it?

    - by MestreLion
    When trying to manage trash can from mounted NTFS volumes, I ended up reading FreeDesktop.org's reference on it. Poking around and doing some tests, I realized Ubuntu/Gnome does not follow the specs 100%. Here's why: For non-/ partitions, it always uses <driveroot>/.Trash-<uid>; it never used <driveroot>/.Trash/<uid>, even when I created it in advance. While this works, it's annoying: if I have 15 users, I end up with 15 /.Trash-xxx folders in my drive, while the other approach would still give a single folder (with 15 sub-folders). That "pollution" in my drives is very unpleasant. And the spec says "If an $topdir/.Trash directory is absent, an $topdir/.Trash-$uid directory is to be used". Well, it IS present, so why does it never use it? root trash does not work, at least not out of the box. Open nautilus as root and click on trash; it gives an error. Try to delete any file, it says "it can't move to trash". Ok, I know this can be fixed by creating /root/.local/share. But the spec says "A “home trash” directory SHOULD be automatically created for any new user. If this directory is needed for a trashing operation but does not exist, the implementation SHOULD automatically create it, without any warnings or delays.". Why the error then? Bug? Why must I change /etc/fstab entries for mounted volumes, adding options like uid and gid, if the volumes are already mounted as RW for everyone? These are just some examples of deviation from the standard. So, the question is: "If Ubuntu does not adhere 100% to the spec, HOW exactly does the trash work? WHERE can I find a technical reference for Ubuntu's implementation of the trash?" By the way: if Ubuntu does happen to follow specs, please tell me what I am doing wrong, especially regarding the /.Trash-<uid> vs /.Trash/<uid> issue. Thanks! EDIT: Some more info: If a given fs has no support for the sticky bit (VFAT, NTFS), it probably doesn't have support for permissions either (at least VFAT surely doesn't). So what prevents one user from purging / restoring other users' ./Trash-xxx ? If one can read/write his own Trash, one can do the same for the whole drive, including other's trashes, correct? Or does Gnome have some kind of "extra" protection on ./Trash-xxx folders on VFAT/NTFS fs? If Linux can "emulate" file permissions on NTFS mounting by editing /fstab uid and gid options, can it also "emulate" the sticky bit? I would really prefer to use /.Trash/xxx format... For the root issue: for the / partition, I can use trash as root, and it goes to /root/.local/share/Trash. But if I click on Nautilus "Trash" (as root), I get an error. Don't you? So files are correctly trashed, but I can't access it. All I can do is manually "purge" them (by deleting files on /root/.local/share/Trash), but restoring would be very tricky (opening info files and manually moving, etc.). For non-/ partitions (or at least for VFAT/NTFS), I cannot even use trash as root: it does not create a ./Trash-0 folder, it simply says "Cannot trash, want to permanently delete?" Why? About fstab: I use it for a permanent mount for my NTFS partitions. I have several, and if not "pre-mounted" they really clutter the desktop and/or Nautilus. I'd rather have it pre-mounted, integrated in my fs, in mounts like /data , /windows/xp , /windows/vista , and so on, and leave /media and its "mount/unmount" flexibility just for truly removable drives.
So, if Ubuntu/Gnome truly follows the spec, is there any way to fix the root issues and to "emulate" the sticky bit for (at least) my fstab'ed NTFS fixed partitions?
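    For reference, the spec's decision rule for a top directory can be sketched in shell (this is a reading of the FreeDesktop specification, not Ubuntu's actual implementation; paths are illustrative):

        topdir=/data
        uid=$(id -u)
        # The spec only allows $topdir/.Trash if it exists, is not a symlink,
        # and has the sticky bit set -- a bit that VFAT/NTFS cannot store,
        # which would force the .Trash-$uid fallback on such filesystems.
        if [ -d "$topdir/.Trash" ] && [ ! -L "$topdir/.Trash" ] && [ -k "$topdir/.Trash" ]; then
            trash_dir="$topdir/.Trash/$uid"
        else
            trash_dir="$topdir/.Trash-$uid"
        fi
        echo "Would trash to: $trash_dir"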

    Read the article

  • Session and Pop Up Window

    - by imran_ku07
     Introduction :        Session is secure state management. It allows the user to store information on one page and access it on another, and it is powerful enough to store any type of object. Each user's session is identified by a cookie, which the client presents to the server. Unfortunately, when you open a new popup window this cookie is not posted to the server with the request, so the server is unable to identify the session data for the current user.         In this article I will show you how to handle this situation.  Description :         While working on an application, I was getting an exception saying that Session was null when a popup window opened. Looking at the problem more closely, I found that the ASP.NET_SessionId cookie of the parent page was not posted in the cookie header of the child (popup) window.         Therefore, to make the same session available in both the parent and the child (popup) window, you have to present the same cookie. To share the cookie, I passed the parent SessionID in the query string (the URL string here is built server-side in VB):   window.open('http://abc.com/s.aspx?SASID=" & Session.SessionID & "','V');           Then, in the Application_PostMapRequestHandler application event, check whether the current request has no ASP.NET_SessionId cookie and the SASID query string is not null; if so, add the cookie to the Request before the Session is acquired, so that the session data is the same for both the parent and the popup window.    Private Sub Application_PostMapRequestHandler(ByVal sender As Object, ByVal e As EventArgs)           If (Request.Cookies("ASP.NET_SessionId") Is Nothing) AndAlso (Request.QueryString("SASID") IsNot Nothing) Then               Request.Cookies.Add(New HttpCookie("ASP.NET_SessionId", Request.QueryString("SASID")))           End If       End Sub           Now you can access Session in both the parent and the child window without any problem. How this works :          ASP.NET (both Web Forms and MVC) uses a cookie (ASP.NET_SessionId) to identify the requesting user. Cookies may be persistent (saved permanently among the user's cookies) or non-persistent (saved temporarily in browser memory). The ASP.NET_SessionId cookie is saved as non-persistent. This means that if the user closes the browser, the cookie is immediately removed - a sensible step that ensures security. That's why ASP.NET is unable to identify that the request is coming from the same user, and every browser instance gets its own ASP.NET_SessionId. To resolve this, you need to present the parent's ASP.NET_SessionId cookie to the server when opening a popup window. You can confirm this behaviour using tools like Firebug or Fiddler.  Summary :          Hopefully this article showed how to work around the problem of sharing Session state between different browser instances by sharing their session-identifier cookie.

    Read the article

  • How does the number of indexes built on a table impact performance?

    - by Davide Mauri
    We all know that putting too many indexes (I’m talking of non-clustered indexes only, of course) on a table may produce performance problems due to the overhead that each index brings to all insert/update/delete operations on that table. But how much? I mean, we all agree – I think – that, generally speaking, having many indexes on a table is “bad”. But how bad can it be? How much will performance degrade? And on a concurrent system, how much can this situation also hurt SELECT performance? If SQL Server takes more time to update a row in a table due to the number of indexes it also has to update, this also means that locks will be held longer, slowing down the perceived performance of all queries involved. I was quite curious to measure this, also because when teaching it’s by far more impressive and effective to show attendees a chart with the measured impact, so that they can really “feel” what it means! To do the tests, I’ve created a script that creates a table (with a clustered index on the primary key, which is an identity column), loads 1000 rows into the table (inserting 1000 rows using only one insert, instead of issuing 1000 inserts of one row each, in order to minimize the transaction-handling overhead that would otherwise be incurred), and measures the time taken to do it. The process is then repeated 16 times, each time adding a new index on the table, using columns from the table in round-robin fashion. Tests are run against different row sizes, so that it’s possible to check whether performance changes depending on row size. The results are interesting, although expected. This is the chart showing how much time it takes to insert 1000 rows into a table that has from 0 to 16 non-clustered indexes. Each test has been run 20 times in order to have an average value. The values have been cleaned of outliers caused by unpredictable performance fluctuations due to machine activity. The test shows that in a table with a row size of 80 bytes, 1000 rows can be inserted in 9.05 msec if no indexes are present on the table, and the value grows to 88 (!!!) msec when you have 16 indexes on it. This means an impact on performance of 975%. That’s *huge*! Now, what happens if we have a bigger row size? Say that we have a table with a row size of 1520 bytes. Here’s the data, from 0 to 16 indexes on that table: in this case we need nearly 22 msec to insert 1000 rows into a table with no indexes, but we need more than 500 msec if the table has 16 active indexes! Now we’re talking of a 2410% impact on performance! Now we have a tangible idea of the impact of having (too?) many indexes on a table, and also of how row size impacts performance. That’s why the golden rule of OLTP databases – “few indexes, but good” – is so true! (And in fact last week I saw a database with tables with a 1700-byte row size and 23 (!!!) indexes on them!) This also means that too-heavy denormalization is really not a good idea (we’re always talking about OLTP systems, keep that in mind), since performance gets worse as row size increases. So, be careful out there, and keep in mind that “equilibrium” is the key word for a database professional: equilibrium between read and write performance, between normalization and denormalization, between too few and too many indexes. PS Tests were done on a VMware Workstation 7 VM with 2 CPUs and 4 GB of memory. The host machine is a Dell Precision M6500 with an i7 Extreme X920 quad-core HT 2.0GHz and 16 GB of RAM.
    The database is stored on an Intel X25-E SSD, Simple Recovery Model, running on SQL Server 2008 R2. If you also want to run the tests on your own, you can download the test script here: Open TestIndexPerformance.sql
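    A minimal sketch of the kind of measurement loop the post describes (table name, column names, and sizes are hypothetical, not the author's actual script):

        CREATE TABLE dbo.IndexTest (
            Id   INT IDENTITY PRIMARY KEY CLUSTERED,
            Col1 VARCHAR(20), Col2 VARCHAR(20),
            Col3 VARCHAR(20), Col4 VARCHAR(20)
        );

        DECLARE @start DATETIME2 = SYSDATETIME();

        -- one multi-row INSERT instead of 1000 single-row INSERTs,
        -- to keep transaction-handling overhead out of the measurement
        INSERT INTO dbo.IndexTest (Col1, Col2, Col3, Col4)
        SELECT TOP (1000) 'aaaa', 'bbbb', 'cccc', 'dddd'
        FROM sys.all_objects;

        SELECT DATEDIFF(MICROSECOND, @start, SYSDATETIME()) / 1000.0 AS elapsed_msec;

        -- add one more non-clustered index, empty the table, and re-run
        -- the measurement; repeat until 16 indexes are in place
        CREATE NONCLUSTERED INDEX IX_IndexTest_1 ON dbo.IndexTest (Col1);
        TRUNCATE TABLE dbo.IndexTest;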

    Read the article

  • Big Data – Interacting with Hadoop – What is Sqoop? – What is Zookeeper? – Day 17 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of Pig and Pig Latin in the Big Data story. In this article we will understand what Sqoop and Zookeeper are in the Big Data story. There are two important components one should learn about when learning to interact with Hadoop – Sqoop and Zookeeper. What is Sqoop? Most businesses store their data in RDBMSs as well as other data warehouse solutions. They need a way to move data to the Hadoop system for various processing and to return it back to the RDBMS from the Hadoop system. The data movement can happen in real time or in bulk at various intervals. We need a tool which can help us move this data from SQL to Hadoop and from Hadoop to SQL. Sqoop (SQL-to-Hadoop) is such a tool: it extracts data from non-Hadoop data sources, transforms it into a format Hadoop can use, and later loads it into HDFS. Essentially it is an ETL tool: it Extracts, Transforms and Loads from SQL to Hadoop. The best part is that it also extracts data from Hadoop and loads it back into SQL (RDBMS) data stores. Essentially, Sqoop is a command-line tool which does SQL to Hadoop and Hadoop to SQL. It is a command-line interpreter. It creates MapReduce jobs behind the scenes to import data from an external database to HDFS. It is a very effective and easy-to-learn tool, even for non-programmers. What is Zookeeper? ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. In other words, Zookeeper is a replicated synchronization service with eventual consistency. In simpler words: in a Hadoop cluster there are many different nodes, and one node is the master. Let us assume that the master node fails for some reason. In that case, the role of the master node has to be transferred to a different node. The main role of the master node is managing the writers, as that task requires persistence in the order of writing. In this kind of scenario Zookeeper will assign a new master node and make sure that the Hadoop cluster performs without any glitch. Zookeeper is Hadoop’s method of coordinating all the elements of these distributed systems. Here are a few of the tasks Zookeeper is responsible for: Zookeeper manages the entire workflow of starting and stopping various nodes in the Hadoop cluster. When any process in the Hadoop cluster needs certain configuration to complete a task, Zookeeper makes sure that node gets the necessary configuration consistently. If the master node fails, Zookeeper can assign a new master node and make sure the cluster works as expected. There are many other tasks Zookeeper performs around Hadoop cluster coordination and communication. Basically, without the help of Zookeeper it is not possible to design a new fault-tolerant distributed application. Tomorrow In tomorrow’s blog post we will discuss a very important component of the Big Data ecosystem – Big Data Analytics. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
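    For a concrete feel, here is a minimal sketch of both directions (the JDBC URL, credentials, table names, and paths are placeholders, not from the article):

        # import a relational table into HDFS
        sqoop import \
          --connect jdbc:mysql://dbhost/sales \
          --username etl --password secret \
          --table orders \
          --target-dir /data/orders

        # export processed results from HDFS back to the RDBMS
        sqoop export \
          --connect jdbc:mysql://dbhost/sales \
          --username etl --password secret \
          --table order_summary \
          --export-dir /data/order_summary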

    Read the article

  • Detect Unicode Usage in SQL Column

    One optimization you can make to a SQL table that is overly large is to change from nvarchar (or nchar) to varchar (or char).  Doing so will cut the size used by the data in half, from 2 bytes per character (+ 2 bytes of overhead for varchar) to only 1 byte per character.  However, you will lose the ability to store Unicode characters, such as those used by many non-English alphabets.  If the tables are storing user input, and your application is or might one day be used internationally, it's likely that using Unicode for your characters is a good thing.  However, if instead the data is being generated by your application itself or your development team (such as lookup data), and you can be certain that Unicode character sets are not required, then switching such columns to varchar/char can be an easy improvement to make. Avoid Premature Optimization If you are working with a lookup table that has a small number of rows, and is only ever referenced in the application by its numeric ID column, then you won't see any benefit to using varchar vs. nvarchar.  More generally, for small tables, you won't see any significant benefit.  Thus, if you have a general policy in place to use nvarchar/nchar because it offers more flexibility, do not take this post as a recommendation to go against this policy anywhere you can.  You really only want to act on measurable evidence that suggests that using Unicode is resulting in a problem, and that you won't lose anything by switching to varchar/char. Obviously the main reason to make this change is to reduce the amount of space required by each row.  This in turn affects how many rows SQL Server can page through at a time, and can also impact index size and how much disk I/O is required to respond to queries, etc.  If for example you have a table with 100 million records in it and this table has a column of type nchar(5), this column will use 5 * 2 = 10 bytes per row, and with 100M rows that works out to 10 bytes * 100 million = 1000 MB, or 1 GB.  If it turns out that this column only ever stores ASCII characters, then changing it to char(5) would reduce this to 5 * 1 = 5 bytes per row, and only 500 MB.  Of course, if it turns out that it only ever stores the values true and false, then you could go further and replace it with a bit data type which uses only 1 byte per row (100 MB total). Detecting Whether Unicode Is In Use So by now you think that you have a problem and that it might be alleviated by switching some columns from nvarchar/nchar to varchar/char, but you're not sure whether you're currently using Unicode in these columns.  By definition, you should only be thinking about this for a column that has a lot of rows in it, since the benefits just aren't there for a small table, so you can't just eyeball it and look for any non-ASCII characters.  Instead, you need a query.  It's actually very simple:

        SELECT DISTINCT CategoryName
        FROM Categories
        WHERE CategoryName <> CONVERT(varchar, CategoryName)

    Summary Thanks to Gregg Stark for the tip.
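    If the query comes back empty, the change itself is a one-liner (the length and nullability here are assumptions - match them to the column's current definition):

        -- convert the column once you've verified no Unicode data is present
        ALTER TABLE Categories ALTER COLUMN CategoryName varchar(15) NOT NULL;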

    Read the article

  • Hijax == sneaky Javascript redirects? Will I get banned from Google?

    - by Chris Jacob
    Question Will I get penalised for "sneaky Javascript redirects" by Google if I have the following Hijax setup (which requires a JavaScript redirect on the page indexed by Google)? Goal I want to implement Hijax to enable AJAX content to be accessible to non-JavaScript users and search engine crawlers. Background I'm working on a static file server (GitHub Pages). No server side tricks allowed (so Google's #! "hash bang" solution is not an option). I'm trying to keep my files DRY. I don't want to repeat the common OUTER template in all my files, i.e. header, navigation menu, footer, etc. They will live in the main index.html Setup the Hijax index.html page contains all OUTER html/css/js... the site's template. index.html has a <div id="content"> which defaults to containing the "homepage" html. index.html has a navigation menu, with a Hijax link to an "about" page. With JavaScript disabled (e.g. crawler) it follows the link to /about.html. With JavaScript enabled (e.g. most people) the link updates the url hash fragment to /#about and jQuery replaces the <div id="content"> innerHTML with $("#content").load("about.html #inner-container");. AJAX content about.html does not contain anything extra to try and cloak content for crawlers. about.html file contains enough HTML / CSS / JavaScript to display /about.html as a standalone page with its own META data... e.g. <html><head><title>About</title>...</head><body></body></html>. about.html has NO OUTER HTML template (i.e. header, navigation menu, footer, etc). about.html <body> contains a <div id="inner-container"> which holds the content that is injected into index.html. about.html has a <noscript> tag as the first child of <body> which explains to non-JavaScript users that they are viewing the about page "inner content" - with a link to navigate to the index.html page to get the full page layout with menu. The (Sneaky?) Redirect Google indexes the /about.html page. However when a person with JavaScript enabled visits that page there is no OUTER html template (e.g. header, navigation menu, footer, etc). So I need to do a JavaScript redirect to get the person over to the /#about page (deeplinking to the "about" page "state" in index.html). I'm thinking of doing a "redirect on click or after 10 seconds". The end result is that the user ends up on an "enhanced" page back on index.html with all its OUTER template - but the core "page" content is practically identical. Known issue with inbound links e.g. Share / Bookmarking It seems that if a user shares the URL /#about on their blog, when allocating inbound links to my site Google ignores everything after the # ... it allocates value to the / page - See: http://stackoverflow.com/questions/5028405/hashbang-vs-hijax/5166665#5166665. I can only try and minimise this issue by offering "share" buttons on the page with the appropriate urls, i.e. /about.html. Duplicate Sorry. I posted this same question over on http://stackoverflow.com/questions/5561686/hijax-sneaky-javascript-redirects-will-i-get-banned-from-google ... then realised it probably belongs more on this Stack Exchange site... Not sure if I should delete the Stack Overflow question? Or just leave it on both sites? Please leave a comment.
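    A sketch of the redirect in question, as it might sit on about.html (the timing and the exact destination check are assumptions taken from the plan above, not settled code):

        // about.html: JavaScript-capable visitors get bounced to the
        // enhanced state inside index.html
        // ("redirect on click or after 10 seconds")
        if (window.location.pathname === '/about.html') {
            setTimeout(function () {
                window.location.replace('/#about');
            }, 10000);
        }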

    Read the article

  • The Three-Legged Milk Stool - Why Oracle Fusion Incentive Compensation makes the difference!

    - by Richard Lefebvre
    During the London Olympics, we were exposed to dozens of athletes who worked with sports psychologists to maximize their performance. Executives often hire business psychologists to coach their teams to excellence. In the same vein, Fusion Incentive Compensation can be used to get people to change their sales behavior so we can make our numbers. But what about using incentive compensation solutions in a non-sales scenario to drive change? Recently, I was working an opportunity where a company was having a low user adoption rate for Salesforce.com, which was causing problems for them. I suggested they use Fusion Incentive Comp to change the reps' behavior. We tossed around the idea of tracking user adoption by creating a variable bonus for reps based on how well they forecasted revenues in the new system. Another thought was to reward the reps for how often they logged into the system or for the percentage of leads that became opportunities and turned into revenue. A new twist on a great product. Fusion CRM's Sweet Spot I'm excited about the sales performance management (SPM) tools in Fusion CRM. This trio of Incentive Compensation, Territory Management, and Quota Management sets us apart from the competition because Oracle is the only vendor that provides all three of these capabilities on a single tech stack, in a single application, and with a single look and feel. The niche vendors offer standalone territory or incentive compensation solutions, but then the customer has to custom build the other tools and can end up with a Frankenstein-type environment. On average, companies overpay sales commissions by three to eight percent. You calculate that number for a company the size of Oracle for one quarter and it makes a pretty air-tight financial case for using SPM tools to figure accurate commissions. Plus when sales reps get the right compensation, they can be out selling rather than spending precious time figuring out what they didn't get paid or looking for another job. And one more thing ... Oracle knows incentive comp. We have been a Gartner Market Scope leader in this space for the last five years. Our solution gets high marks because of its scalability and because of its interoperability with other technologies. And now that we're leading with Fusion, our incentive compensation offering includes the innovations that the Fusion team built, plus enhancements from the E-Business Suite Incentive Comp team. It's a case of making a good thing even better. (See product video.) The "Wedge" Apps In a number of accounts that I'm working on, there is a non-Oracle CRM system of record. That gives me the perfect opportunity to introduce the benefits of our SPM tools and to get the customer using Fusion. Then the door is wide open for the company to uptake more of Fusion CRM, especially since all the integrations they need are out of the box. I really believe that implementing this wedge of SPM tools is the ticket to taking market share away from other vendors. It allows us to insert ourselves in an environment where no other CRM solution in the market has the extending capabilities of Fusion. Not Just Your Usual Suspects Usually the stakeholders that I talk to for Territory Management are tightly aligned with the sales management team. When I sell the quota planning tool, I'm talking to finance people on the ERP side of the house who are measuring quotas and forecasting revenue. 
And then Incentive Comp is of most interest to the sales operations people, and generally these people roll up to either HR or the payroll department. I think of our Fusion SPM tools as a three-legged stool straddling an organization's Sales, Finance, and HR departments. So when you're prospecting for opportunities -- yes, people with a CRM perspective will be very interested -- but don't limit yourselves to that constituency. You might find stakeholders in accounting, revenue planning, or HR compensation teams. You just might discover, as I did at United Airlines, that the HR organization is spearheading the CRM project because incentive compensation is what they need ... and they're the ones with the budget. Jason Loh Global Solutions Manager, Fusion CRM Sales Planning Oracle Corporation

    Read the article

  • Move on and look elsewhere, or confront the boss?

    - by Meister
    Background: I have my Associates in Applied Science (Comp/Info Tech) with a strong focus in programming, and I'm taking University classes to get my Bachelors. I was recently hired at a local company to be a Software Engineer I on a team of about 8, and I've been told they're looking to hire more. This is my first job, and I was offered what I feel to be an extremely generous starting salary ($30/hr essentially + benefits and yearly bonus). What got me hired was my passion for programming and a strong set of personal projects. Problem: I had no prior experience when I interviewed, so I didn't know exactly what to ask them about the company when I was hired. I've spotted a number of warning signs and annoyances since then, such as: Four developers when I started, with everyone talking about "Ben" or "Ryan" leaving. One engineer hired thirty days before me, one hired two weeks after me. Most of the department has been hiring a large number of people since I started. Extremely limited internet access. I understand the idea from an IT point of view, but not only is Facebook blocked, but so are YouTube, Twitter, and Pandora. I've also figured out that they block all access to non-DNS websites (http://xxx.xxx.xxx.xxx/) and strangely enough Miranda-IM. Low cubicles. Which is fine because I like my immediate coworkers, but they put the developers with the customer service, customer training, and QA department in a huge open room. Noise, noise, noise, and people stop to chitchat all day long. Headphones only go so far. Several emails have been sent out by my boss since I started telling us programmers to not talk about non-work-related things like video games at our cubicles, despite us only spending maybe five minutes every few hours doing so. Further digging tells me that this is because someone keeps complaining that the programmers are "slacking off". People are looking over my shoulder all day. I was in the Freenode webchat to get help with a programming issue, and within minutes I had an email from my boss (to all the developers) telling us that we should NOT be connected to any outside chat servers at work. Version control system from 2005 that we must access with IE and keep the Java 1.4 JRE installed in order to use. I accidentally updated to Java 6 one day and spent the next two days fighting with my PC to undo this "problem". No source control, no comments on anything, no standards, no code review, no unit testing, no common sense. I literally found a problem in how they handle string resource translations that stems from the simple fact that they don't trim excess white spaces, leading to developers doing: getResource("Date: ") instead of: getResource("Date") + ": ", and I was told to just add the excess white spaces back to the database instead of dealing with the issue directly. Some of these things I'd like to try to understand, but I like having IRC open to talk in a few different rooms during the day and keep in touch with friends/family over IM. They don't break my concentration (not NEARLY as much as the lady from QA stopping by to talk about her son), but because people are looking over my shoulder all day as they walk by they complain when they see something that's not "programmer-looking work". I've been told by my boss and QA that I do good, fast work.
    I should be judged on my work output and quality, not what I have up on my screen for the five seconds you're walking by. So, my question is, even though I'm just barely at my 90 days: how do you decide to move on from a job and look elsewhere, or when should you start working with your boss to resolve these issues? Is it even possible to get the boss to work with me on many of these things? This is the only place I heard back from even though I sent out several résumés a day for several months, and this place does pay well for putting up with its many flaws, but I'm just starting to get so miserable working here already. Should I just put up with it?

    Read the article

  • PHP code works on Chrome, but not on Firefox or IE (send email via HTML form) [on hold]

    - by Cachirro
    My brother has this form:

        <form id="lista" action="lista2.php" method="post">
        <input name="cf_name" type="text" size="50" hidden="yes" class="obscure">
        <input name="cf_email" type="text" size="50" hidden="yes" class="obscure">
        <textarea name="cf_message" cols="45" rows="10" hidden="yes" class="obscure"> </textarea>
        <input type="image" name="submit" value="Enviar Lista por Email" src="imagens/lista_email.png" width="40" height="40" onclick="this.form.elements['cf_message'].value = lista_mail;this.form.elements['cf_name'].value = prompt('Escreva o seu nome:', '');this.form.elements['cf_email'].value = prompt('Escreva o seu email:', '');">
        <input name="submit2" type="submit" value="Enviar" hidden="yes" class="obscure">
        </form>

    That calls this PHP file:

        <?php
        if ( isset($_POST['submit']) ) {
        // SMTP authentication data
        $smtpinfo['host'] = 'localhost';
        $smtpinfo['port'] = '25';
        $smtpinfo['auth'] = true;
        $smtpinfo['username'] = 'xxx';
        $smtpinfo['password'] = 'xxx';
        // Data received from the form
        $nome = $_POST['cf_name'];
        $email = $_POST['cf_email'];
        $mensagem = $_POST['cf_message'];
        // PEAR include. Make sure PEAR is enabled on your hosting
        require_once "Mail.php";
        // Message body
        $body = "Nome: ".$nome;
        $body.= "\n\n";
        $body.= nl2br($mensagem);
        $headers = array ('From' => $email, 'To' => $smtpinfo["username"], 'Subject' => 'Encomenda Website');
        $mail_object = Mail::factory('smtp', $smtpinfo);
        $mail = $mail_object->send($smtpinfo["username"], $headers, $body);
        if ( PEAR::isError($mail) ) {
        echo ("<p>" . $mail->getMessage() . "</p>");
        } else {
        echo ('<b><font color="FFFF00">Mensagem enviada com sucesso.<br><br></b>Seu email: ' . $email . '<br><br></font>');
        }}
        ?>

    This basically sends an email with some selected products, plus a name and email address. The problem is that it works perfectly in Chrome, but not in FF or IE. When the submit image is pressed, the URL changes to the PHP file, but it displays a blank page. After enabling display errors with ini_set('display_errors',1); ini_set('display_startup_errors',1); error_reporting(-1);, FF/IE display a blank page and the email isn't sent, while Chrome sends the email and displays this: Strict Standards: Non-static method Mail::factory() should not be called statically in /var/www/vhosts/[site url]/httpdocs/lista2.php on line 33 Strict Standards: Non-static method PEAR::isError() should not be called statically, assuming $this from incompatible context in /usr/share/php/Mail/smtp.php (don't know if it helps). So, what is causing the email to be sent in Chrome and not in FF or IE? Thank you.
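    One browser-dependent detail worth checking (an assumption about the cause, not a confirmed diagnosis): for <input type="image">, browsers always submit the click coordinates as submit.x/submit.y, but not all of them also submit the button's own name/value pair, so isset($_POST['submit']) can fail in Firefox/IE while passing in Chrome. PHP exposes the coordinates with underscores, so a more tolerant guard would look like this:

        // sketch: accept any of the ways this form can be submitted
        // (PHP converts "submit.x" in the POST data to "submit_x")
        if (isset($_POST['submit']) || isset($_POST['submit_x']) || isset($_POST['submit2'])) {
            // ... build and send the mail as before
        }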

    Read the article

  • Why won't my code work in Ubuntu Server 11.10? Is it because of gd library?

    - by Derrick
    I get this error when running the following code: No such file found at "widgets/104-text.png" I know that the code works because it works on my other non-ubuntu server. I do not know if it is gd library or what. I tried both the bundled version and the non-bundled and both do not make this code work. $con = mysql_connect("localhost","user","abc123"); if (!$con) { die('Could not connect: ' . mysql_error()); } mysql_select_db("satabase_name", $con); $productid2 = $this->product->id; $thename = mysql_query("SELECT * FROM pshop_product_lang WHERE id_product = '$productid2' LIMIT 1"); $thename2 = mysql_fetch_array($thename); $string2 = $thename2['name']; $string = (strlen($string2) > 25) ? substr($string2, 0, 25) . '...' : $string2; $font = 4; $width = imagefontwidth($font) * strlen($string); $height = imagefontheight($font); $image = imagecreatetruecolor ($width,$height); $white = imagecolorallocate ($image,255,255,255); $black = imagecolorallocate ($image,0,0,0); imagefill($image,0,0,$white); imagestring($image,$font,0,0,$string, $black); imagepng($image, 'widgets/' . $productid2 . '-text.png'); $getimg110 = mysql_query("SELECT * FROM pshop_image WHERE id_product = '$productid2'"); $gotimg110 = mysql_fetch_array($getimg110); $slash110 = addcslashes($gotimg110[id_image], '\0..\999999999999999999999'); $str110 = str_replace('\\', '/', $slash110); $newimg110 = '<img src="img/p' . $str110 . '/' . $gotimg110[id_image] . '-large_default.jpg" />'; include("conf.inc.php"); include('ImageWorkshop.php'); // Initialization of layer you need $pinguLayer = new ImageWorkshop(array( 'imageFromPath' => 'widgets/background.png', )); $pinguLayer2 = new ImageWorkshop(array( 'imageFromPath' => 'img/p' . $str110 . '/' . $gotimg110[id_image] . '-large_default.jpg', )); $pinguLayer3 = new ImageWorkshop(array( 'imageFromPath' => 'widgets/' . $productid2 . '-text.png', )); // resize pingu layer $thumbWidth2 = 150; // px $thumbHeight2 = 150; $thumbWidth = 400; // px $thumbHeight = 200; $pinguLayer2->resizeInPixel($thumbWidth2, $thumbHeight2); $pinguLayer->resizeInPixel($thumbWidth, $thumbHeight); // Add 2 layers on pingu layer $pinguLayer->addLayerOnTop($pinguLayer2, null, null, 'LM'); $pinguLayer->addLayerOnTop($pinguLayer3, 70, 25, 'MM'); // Saving the result in a folder $pinguLayer->save("widgets/", $productid2 . ".gif", true, null, 95); The file path is correct, however this part of the code is not creating the image as it is supposed to: $thename2 = mysql_fetch_array($thename); $string2 = $thename2['name']; $string = (strlen($string2) > 25) ? substr($string2, 0, 25) . '...' : $string2; $font = 4; $width = imagefontwidth($font) * strlen($string); $height = imagefontheight($font); $image = imagecreatetruecolor ($width,$height); $white = imagecolorallocate ($image,255,255,255); $black = imagecolorallocate ($image,0,0,0); imagefill($image,0,0,$white); imagestring($image,$font,0,0,$string, $black); imagepng($image, 'widgets/' . $productid2 . '-text.png');
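    Since imagepng() returns false rather than raising an error when it cannot write the file, one thing worth ruling out (an assumption: a missing or unwritable widgets/ directory on the new server is a common cause of exactly this symptom) is the output path itself. A minimal guard before the save:

        // sketch: make sure the output directory exists and is writable
        // before imagepng() tries to save into it
        $outDir = 'widgets/';
        if (!is_dir($outDir)) {
            mkdir($outDir, 0775, true);
        }
        if (!is_writable($outDir)) {
            die("Cannot write to $outDir - check permissions for the web server user");
        }
        if (!imagepng($image, $outDir . $productid2 . '-text.png')) {
            die('imagepng() failed to write the file');
        }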

    Read the article

  • edited and reversed changes on .htaccess - site starts redirecting to .comindex.php/

    - by Aurigae
    Site is a Joomla 2.5 site. I wanted to add a non www to www redirect to the htaccess file, did so, then the redirection went mad, reversed but still the site redirects. When i click view site in admin panel, i get linked to http://domain.comindex.php/ The website is http://www.domain.com Visiting the website URL works without www, but once you click on projects it acts mad too. Projects is managed with joomshopping extension. EDIT: the redirect also happens when rewrite is deactivated in admin panel. ## # @package Joomla # @copyright Copyright (C) 2005 - 2012 Open Source Matters. All rights reserved. # @license GNU General Public License version 2 or later; see LICENSE.txt ## ## # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE! # # The line just below this section: 'Options +FollowSymLinks' may cause problems # with some server configurations. It is required for use of mod_rewrite, but may already # be set by your server administrator in a way that dissallows changing it in # your .htaccess file. If using it causes your server to error out, comment it out (add # to # beginning of line), reload your site in your browser and test your sef url's. If they work, # it has been set by your server administrator and you do not need it set here. ## ## Can be commented out if causes errors, see notes above. Options +FollowSymLinks ## Mod_rewrite in use. RewriteEngine On ## Begin - Rewrite rules to block out some common exploits. # If you experience problems on your site block out the operations listed below # This attempts to block the most common type of exploit `attempts` to Joomla! # # Block out any script trying to base64_encode data within the URL. RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR] # Block out any script that includes a <script> tag in URL. RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR] # Block out any script trying to set a PHP GLOBALS variable via URL. RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR] # Block out any script trying to modify a _REQUEST variable via URL. RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2}) # Return 403 Forbidden header and show the content of the root homepage RewriteRule .* index.php [F] # ## End - Rewrite rules to block out some common exploits. ## Begin - Custom redirects # # If you need to redirect some pages, or set a canonical non-www to # www redirect (or vice versa), place that code here. Ensure those # redirects use the correct RewriteRule syntax and the [R=301,L] flags. # ## End - Custom redirects ## # Uncomment following line if your webserver's URL # is not directly related to physical file paths. # Update Your Joomla! Directory (just / for root). ## # RewriteBase / ## Begin - Joomla! core SEF Section. 
# RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}] # # If the requested path and file is not /index.php and the request # has not already been internally rewritten to the index.php script RewriteCond %{REQUEST_URI} !^/index\.php # and the request is for something within the component folder, # or for the site root, or for an extensionless URL, or the # requested URL ends with one of the listed extensions RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC] # and the requested path and file doesn't directly match a physical file RewriteCond %{REQUEST_FILENAME} !-f # and the requested path and file doesn't directly match a physical folder RewriteCond %{REQUEST_FILENAME} !-d # internally rewrite the request to the index.php script RewriteRule .* index.php [L] # ## End - Joomla! core SEF Section. Redirect 301 /index.html /index.php Redirect 301 /services /project Redirect 301 /projects/projects.html /project Redirect 301 /projects/project1.html /project Redirect 301 /projects/project2.html /project Redirect 301 /projects /project Redirect 301 /keypersonnel.html /about-agrin/keystaff Redirect 301 /cooperation.htm /about-agrin/intcoop Redirect 301 /member.html /about-agrin/memberships Redirect 301 /contact.html /contacts Redirect 301 /hr.htm /jobs Redirect 301 /index.php/404 /index.php
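    For reference, a canonical non-www to www redirect of the kind the file's "Custom redirects" section expects would look like this (the domain is a placeholder); note the slash before $1 - omitting it is one common way to end up with mangled URLs like domain.comindex.php:

        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]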

    Read the article

  • MySQL Server 5.6 default my.cnf and my.ini

    - by user12626240
    We've introduced a default my.cnf / my.ini file for MySQL Server that you can now see in the 5.6.8 release candidate: # For advice on how to change settings please see # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html [mysqld] # Remove leading # and set to the amount of RAM for the most important data # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%. # innodb_buffer_pool_size = 128M   # Remove leading # to turn on a very important data integrity option: logging # changes to the binary log between backups. # log_bin   # These are commonly set, remove the # and set as required. # basedir = ..... # datadir = ..... # port = ..... # socket = ..... # server_id = .....   # Remove leading # to set options mainly useful for reporting servers. # The server defaults are faster for transactions and fast SELECTs. # Adjust sizes as needed, experiment to find the optimal values. # join_buffer_size = 128M # sort_buffer_size = 2M # read_rnd_buffer_size = 2M   sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES    There is also a template file called my-default.cnf or my-default.ini that has these lines near the start: # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the # *** default location during install, and will be replaced if you # *** upgrade to a newer version of MySQL.   On Linux systems, the mysql_install_db command will copy the template file to the final location, where the server will read and use the file, removing the extra three lines. On Windows, the installer will create extra settings based on the answers you gave during installation. Neither will overwrite an existing my.cnf or my.ini file. The only initially active setting here is to change the value of  sql_mode from the server default of NO_ENGINE_SUBSTITUTION to NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES. This strict mode changes warnings for some non-standard behaviour into errors. This can cause applications which rely on the non-standard things, like dates that aren't valid, to lose data. If we had just changed the server default, the new setting would affect all servers that lack an explicit sql_mode setting, including those where strict mode is harmful. So we did it in the default file instead because that will only affect new server installations. You should expect that in our next version after 5.6, the server default will include STRICT_TRANS_TABLES. Our Windows installer and some of our connectors already use STRICT_TRANS_TABLES by default. Strict has been our preferred setting for many years and it is good to see some development platforms are using it. If you need the old behaviour, just remove the STRICT_TRANS_TABLES setting. If you do this, please also ask your application provider to make it unnecessary. They can do that by setting the session sql_mode setting in their own connections, so the rest of the applications using the server don't have to have an undesirable default. We've kept this file as small as possible because we found that our old files were too big and confused people. We've also now removed the old my-huge and related example files. One key part of this is the link to the documentation, where we will provide an introduction to some key settings. We'd like to hear your feedback on settings that will benefit most users or are most important to call out for existing users. Please do that by commenting here or if you prefer by adding comments to this bug report.
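    The per-connection override mentioned above is a single statement the application can issue after connecting, for example to restore the old behaviour for just its own sessions:

        SET SESSION sql_mode = 'NO_ENGINE_SUBSTITUTION';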

    Read the article

  • ORA-4031 Troubleshooting Tool

    - by Takeyoshi Sasaki
    ORA-4031 is the error raised when a process cannot allocate a contiguous chunk of shared memory (most often in the shared pool) inside the SGA. Diagnosing it by hand can take considerable time, which is where the ORA-4031 Troubleshooting Tool comes in. The tool is a web-based diagnostic aid on My Oracle Support: you upload diagnostic data from the database that hit ORA-4031, and it analyzes the data and suggests likely causes and corrective actions. It is listed in the Diagnostic Tools Catalog on My Oracle Support. Typical inputs include the alert log covering the time of the error, an ADR incident package, and (for 10g and later) an AWR report or Statspack snapshot - much the same material you would attach to a Service Request. The tool then produces findings; two examples follow. 1) "High Session_Cached_Cursor Setting Causing Excessive Consumption of Shared Pool": the cursor cache kept per session can consume a significant part of the shared pool. The matching check is: "In your Alert log, look for parameters under 'System parameters with non-default values:'. If session_cached_cursor * 2000 / shared_pool_size is greater than 10%, then session_cached_cursors are consuming significant shared_pool_size." The suggested fix is: "Decrease the parameter SESSION_CACHED_CURSORS". 2) "Insufficient SGA Free Memory at Startup": "This issue could occur if in the init.ora parameters of your Alert log, (shared_pool_size + large_pool_size + java_pool_size + db_keep_cache_size + streams_pool_size + db_cache_size) / sga_target is greater than 90%." In other words, if the minimum sizes of the SGA components add up to more than 90% of sga_target, the memory manager has almost no room to grow the shared pool on demand. A matching report reads: "In your Alert log, SGA Utilization (Sum of shared_pool_size, large_pool_size, java_pool_size, db_keep_cache_size, streams_pool_size and db_cache_size over sga_target) is 99%, which might be too high." The suggested fix is: "Reduce the minimum values for the dynamic SGA components to allow memory manager to make changes as needed".
    The ORA-4031 Troubleshooting Tool will not resolve every ORA-4031 on its own, but it is a quick first step that can surface the cause before you open a Service Request, so it is well worth trying when this error appears.
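    The ratio from the first finding can be checked on a live instance with a quick query (a sketch; note that shared_pool_size may report 0 when it is not explicitly set, for example under automatic memory management, so guard against dividing by zero):

        -- session_cached_cursors * 2000 as a percentage of shared_pool_size
        SELECT CASE WHEN sps.value = 0 THEN NULL
                    ELSE ROUND(100 * scc.value * 2000 / sps.value, 1)
               END AS pct_of_shared_pool
        FROM  (SELECT TO_NUMBER(value) AS value FROM v$parameter
               WHERE name = 'session_cached_cursors') scc,
              (SELECT TO_NUMBER(value) AS value FROM v$parameter
               WHERE name = 'shared_pool_size') sps;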

    Read the article

  • "SignTool error: Access is denied" in TFS 2010 build process

    - by user351352
    I'm getting "SignTool Error: Access is Denied" when I attempt to sign a file. When I use an administrator cmd, all works fine. However, this process is going to be used in a TFS 2010 build process and using the InvokeProcess task with signtool gives the same access denied message as a non-administrator command prompt. More info: On a Win2008 R2 enterprise machine. User is machine admin and on the domain. The TFS Build service is also set to run as this user. Using a self signed certificate created using these instructions: How do I create a self-signed certificate for code signing on Windows? After following these instructions I have the following files: MyCA.cer MyCA.pvk MySPC.cer MySPC.pvk MySPC.pfx MyCA is in my Trusted Root Certification Authorities I imported MySPC.pfx into personal certificates, following the advice here: SignTool error: Access is denied To do the signing I'm using the thumbprint of the MySPC.pfx that was imported into the Personal section so my signtool command looks like: sign /sha1 1e9d7b5ad98552d9c58944e3f3903e6b929f4819 /t http://timestamp.verisign.com/scripts/timestamp.dll "FileName" Once again this works in Admin mode. This also works when running cmd as administrator: sign /f "C:\Code Signing Non-Release\MySPC.pfx" /t http://timestamp.verisign.com/scripts/timestamp.dll "FileName" New to code signing in general, so any help is welcome.

    Read the article

  • Solving Diophantine Equations Using Python

    - by HARSHITH
    In mathematics, a Diophantine equation (named for Diophantus of Alexandria, a third century Greek mathematician) is a polynomial equation where the variables can only take on integer values. Although you may not realize it, you have seen Diophantine equations before: one of the most famous Diophantine equations is the one at the heart of Fermat's Last Theorem, x^n + y^n = z^n. We are not certain that McDonald's knows about Diophantine equations (actually we doubt that they do), but they use them! McDonald's sells Chicken McNuggets in packages of 6, 9 or 20 McNuggets. Thus, it is possible, for example, to buy exactly 15 McNuggets (with one package of 6 and a second package of 9), but it is not possible to buy exactly 16 nuggets, since no non-negative integer combination of 6's, 9's and 20's adds up to 16. To determine if it is possible to buy exactly n McNuggets, one has to solve a Diophantine equation: find non-negative integer values of a, b, and c, such that 6a + 9b + 20c = n. Write an iterative program that finds the largest number of McNuggets that cannot be bought in exact quantity. Your program should print the answer in the following format (where the correct number is provided in place of n): "Largest number of McNuggets that cannot be bought in exact quantity: n"
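    A minimal sketch of one iterative approach (one way to satisfy the assignment, not the official solution):

        def can_buy(n):
            # n is purchasable exactly if 6a + 9b + 20c == n for some
            # non-negative integers a, b, c
            for a in range(n // 6 + 1):
                for b in range((n - 6 * a) // 9 + 1):
                    if (n - 6 * a - 9 * b) % 20 == 0:
                        return True
            return False

        largest = 0
        consecutive = 0
        n = 0
        # once 6 consecutive values are purchasable, every larger value is
        # too (just add more packages of 6), so we can stop
        while consecutive < 6:
            n += 1
            if can_buy(n):
                consecutive += 1
            else:
                consecutive = 0
                largest = n
        print("Largest number of McNuggets that cannot be bought "
              "in exact quantity: %d" % largest)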

    Read the article

  • JavaMail not sending Subject or From under jetty:run-war

    - by Jason Thrasher
    Has anyone seen JavaMail not sending proper MimeMessages to an SMTP server, depending on how the JVM is started? At the end of the day, I can't send JavaMail SMTP messages with Subject: or From: fields, and it appears other headers are missing, only when running the app as a war. The web project is built with Maven, and I'm testing sending JavaMail using a browser and a simple mail.jsp to debug and see different behavior when launching the app with: 1) mvn jetty:run (mail sends fine, with proper Subject and From fields) 2) mvn jetty:run-war (mail sends fine, but missing Subject, From, and other fields) I've meticulously run diff on the (verbose) Maven debug output (-X), and there are zero differences in the runtime dependencies between the two. I've also compared System properties, and they are identical. Something else is happening in the jetty:run-war case that changes the way JavaMail behaves. What other stones need turning? Curiously, I've tried a debugger in both situations and found that the javax.mail.internet.MimeMessage instance is getting created differently. The webapp is using Spring to send email picked off of an Apache ActiveMQ queue. When running the app via mvn jetty:run, the MimeMessage.contentStream variable is used for the message content. When running via mvn jetty:run-war, the MimeMessage.content variable is used for the message contents, and the content = ASCIIUtility.getBytes(is); call removes all of the header data from the parsed content. Since this seemed very odd, and debugging Spring/ActiveMQ is a deep dive, I created a simplified test without any of that infrastructure: just a JSP using mail-1.4.2.jar, yet the same headers are missing. Also of note, these headers are missing when running the WAR file under Tomcat 5.5.27 as well. Tomcat behaves just like Jetty when running the WAR, with the same missing headers. With JavaMail debugging turned on, I clearly see different output.
GOOD CASE: In the jetty:run (non-WAR) the log output is: DEBUG: JavaMail version 1.4.2 DEBUG: successfully loaded resource: /META-INF/javamail.default.providers DEBUG: Tables of loaded providers DEBUG: Providers Listed By Class Name: {com.sun.mail.smtp.SMTPSSLTransport=javax.mail.Provider[TRANSPORT,smtps,com.sun.mail.smtp.SMTPSSLTransport,Sun Microsystems, Inc], com.sun.mail.smtp.SMTPTransport=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc], com.sun.mail.imap.IMAPSSLStore=javax.mail.Provider[STORE,imaps,com.sun.mail.imap.IMAPSSLStore,Sun Microsystems, Inc], com.sun.mail.pop3.POP3SSLStore=javax.mail.Provider[STORE,pop3s,com.sun.mail.pop3.POP3SSLStore,Sun Microsystems, Inc], com.sun.mail.imap.IMAPStore=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc], com.sun.mail.pop3.POP3Store=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Sun Microsystems, Inc]} DEBUG: Providers Listed By Protocol: {imaps=javax.mail.Provider[STORE,imaps,com.sun.mail.imap.IMAPSSLStore,Sun Microsystems, Inc], imap=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc], smtps=javax.mail.Provider[TRANSPORT,smtps,com.sun.mail.smtp.SMTPSSLTransport,Sun Microsystems, Inc], pop3=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Sun Microsystems, Inc], pop3s=javax.mail.Provider[STORE,pop3s,com.sun.mail.pop3.POP3SSLStore,Sun Microsystems, Inc], smtp=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc]} DEBUG: successfully loaded resource: /META-INF/javamail.default.address.map DEBUG: getProvider() returning javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc] DEBUG SMTP: useEhlo true, useAuth true DEBUG SMTP: trying to connect to host "mail.authsmtp.com", port 465, isSSL false 220 mail.authsmtp.com ESMTP Sendmail 8.14.2/8.14.2/Kp; Thu, 18 Jun 2009 01:35:24 +0100 (BST) DEBUG SMTP: connected to host "mail.authsmtp.com", port: 465 EHLO jmac.local 250-mail.authsmtp.com Hello sul-pubs-3a.Stanford.EDU [171.66.201.2], pleased to meet you 250-ENHANCEDSTATUSCODES 250-PIPELINING 250-8BITMIME 250-SIZE 52428800 250-AUTH CRAM-MD5 DIGEST-MD5 LOGIN PLAIN 250-DELIVERBY 250 HELP DEBUG SMTP: Found extension "ENHANCEDSTATUSCODES", arg "" DEBUG SMTP: Found extension "PIPELINING", arg "" DEBUG SMTP: Found extension "8BITMIME", arg "" DEBUG SMTP: Found extension "SIZE", arg "52428800" DEBUG SMTP: Found extension "AUTH", arg "CRAM-MD5 DIGEST-MD5 LOGIN PLAIN" DEBUG SMTP: Found extension "DELIVERBY", arg "" DEBUG SMTP: Found extension "HELP", arg "" DEBUG SMTP: Attempt to authenticate DEBUG SMTP: check mechanisms: LOGIN PLAIN DIGEST-MD5 AUTH LOGIN 334 VXNlcm5hjbt7 YWM0MDkwhi== 334 UGFzc3dvjbt7 YXV0aHNtdHAydog3 235 2.0.0 OK Authenticated DEBUG SMTP: use8bit false MAIL FROM:<[email protected]> 250 2.1.0 <[email protected]>... Sender ok RCPT TO:<[email protected]> 250 2.1.5 <[email protected]>... Recipient ok DEBUG SMTP: Verified Addresses DEBUG SMTP: Jason Thrasher <[email protected]> DATA 354 Enter mail, end with "." on a line by itself From: Webmaster <[email protected]> To: Jason Thrasher <[email protected]> Message-ID: <[email protected]> Subject: non-Spring: Hello World MIME-Version: 1.0 Content-Type: text/plain;charset=UTF-8 Content-Transfer-Encoding: 7bit Hello World: message body here . 
250 2.0.0 n5I0ZOkD085654 Message accepted for delivery QUIT 221 2.0.0 mail.authsmtp.com closing connection BAD CASE: The log output when running as a WAR, with missing headers, is quite different: Loading javamail.default.providers from jar:file:/Users/jason/.m2/repository/javax/mail/mail/1.4.2/mail-1.4.2.jar!/META-INF/javamail.default.providers DEBUG: loading new provider protocol=imap, className=com.sun.mail.imap.IMAPStore, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=imaps, className=com.sun.mail.imap.IMAPSSLStore, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=smtp, className=com.sun.mail.smtp.SMTPTransport, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=smtps, className=com.sun.mail.smtp.SMTPSSLTransport, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=pop3, className=com.sun.mail.pop3.POP3Store, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=pop3s, className=com.sun.mail.pop3.POP3SSLStore, vendor=Sun Microsystems, Inc, version=null Loading javamail.default.providers from jar:file:/Users/jason/Documents/dev/subscribeatron/software/trunk/web/struts/target/work/webapp/WEB-INF/lib/mail-1.4.2.jar!/META-INF/javamail.default.providers DEBUG: loading new provider protocol=imap, className=com.sun.mail.imap.IMAPStore, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=imaps, className=com.sun.mail.imap.IMAPSSLStore, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=smtp, className=com.sun.mail.smtp.SMTPTransport, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=smtps, className=com.sun.mail.smtp.SMTPSSLTransport, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=pop3, className=com.sun.mail.pop3.POP3Store, vendor=Sun Microsystems, Inc, version=null DEBUG: loading new provider protocol=pop3s, className=com.sun.mail.pop3.POP3SSLStore, vendor=Sun Microsystems, Inc, version=null DEBUG: getProvider() returning provider protocol=smtp; type=javax.mail.Provider$Type@98203f; class=com.sun.mail.smtp.SMTPTransport; vendor=Sun Microsystems, Inc DEBUG SMTP: useEhlo true, useAuth false DEBUG SMTP: trying to connect to host "mail.authsmtp.com", port 465, isSSL false 220 mail.authsmtp.com ESMTP Sendmail 8.14.2/8.14.2/Kp; Thu, 18 Jun 2009 01:51:46 +0100 (BST) DEBUG SMTP: connected to host "mail.authsmtp.com", port: 465 EHLO jmac.local 250-mail.authsmtp.com Hello sul-pubs-3a.Stanford.EDU [171.66.201.2], pleased to meet you 250-ENHANCEDSTATUSCODES 250-PIPELINING 250-8BITMIME 250-SIZE 52428800 250-AUTH CRAM-MD5 DIGEST-MD5 LOGIN PLAIN 250-DELIVERBY 250 HELP DEBUG SMTP: Found extension "ENHANCEDSTATUSCODES", arg "" DEBUG SMTP: Found extension "PIPELINING", arg "" DEBUG SMTP: Found extension "8BITMIME", arg "" DEBUG SMTP: Found extension "SIZE", arg "52428800" DEBUG SMTP: Found extension "AUTH", arg "CRAM-MD5 DIGEST-MD5 LOGIN PLAIN" DEBUG SMTP: Found extension "DELIVERBY", arg "" DEBUG SMTP: Found extension "HELP", arg "" DEBUG SMTP: Attempt to authenticate DEBUG SMTP: check mechanisms: LOGIN PLAIN DIGEST-MD5 AUTH LOGIN 334 VXNlcm5hjbt7 YWM0MDkwhi== 334 UGFzc3dvjbt7 YXV0aHNtdHAydog3 235 2.0.0 OK Authenticated DEBUG SMTP: use8bit false MAIL FROM:<[email protected]> 250 2.1.0 <[email protected]>... Sender ok RCPT TO:<[email protected]> 250 2.1.5 <[email protected]>... 
Recipient ok DEBUG SMTP: Verified Addresses DEBUG SMTP: Jason Thrasher <[email protected]> DATA 354 Enter mail, end with "." on a line by itself Hello World: message body here . 250 2.0.0 n5I0pkSc090137 Message accepted for delivery QUIT 221 2.0.0 mail.authsmtp.com closing connection

Here's the actual mail.jsp that I'm testing war/non-war with:

<%@page import="java.util.*"%>
<%@page import="javax.mail.internet.*"%>
<%@page import="javax.mail.*"%>
<%
InternetAddress from = new InternetAddress("[email protected]", "Webmaster");
InternetAddress to = new InternetAddress("[email protected]", "Jason Thrasher");
String subject = "non-Spring: Hello World";
String content = "Hello World: message body here";

final Properties props = new Properties();
props.setProperty("mail.transport.protocol", "smtp");
props.setProperty("mail.host", "mail.authsmtp.com");
props.setProperty("mail.port", "465");
props.setProperty("mail.username", "myusername");
props.setProperty("mail.password", "secret");
props.setProperty("mail.debug", "true");
props.setProperty("mail.smtp.auth", "true");
props.setProperty("mail.smtp.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
props.setProperty("mail.smtp.socketFactory.fallback", "false");

Session mailSession = Session.getDefaultInstance(props);
Message message = new MimeMessage(mailSession);
message.setFrom(from);
message.setRecipient(Message.RecipientType.TO, to);
message.setSubject(subject);
message.setContent(content, "text/plain;charset=UTF-8");

Transport trans = mailSession.getTransport();
trans.connect(props.getProperty("mail.host"),
        Integer.parseInt(props.getProperty("mail.port")),
        props.getProperty("mail.username"),
        props.getProperty("mail.password"));
trans.sendMessage(message, message.getRecipients(Message.RecipientType.TO));
trans.close();
%>
email was sent

SOLUTION: Yes, the problem was transitive dependencies of Apache CXF 2. I had to exclude geronimo-javamail_1.4_spec from the build, and just rely on javax's mail-1.4.jar:

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-frontend-jaxws</artifactId>
    <version>2.2.6</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.geronimo.specs</groupId>
            <artifactId>geronimo-javamail_1.4_spec</artifactId>
        </exclusion>
    </exclusions>
</dependency>

Thanks for all of the answers.

    Read the article

  • What is best practice as far as using perl-isms (idiomatic expressions) in Perl?

    - by DVK
    A couple of years back I participated in writing the best practices/coding style guide for our (fairly large and often Perl-using) company. It was done by a committee of "senior" Perl developers. As with anything done by consensus, it had parts that everyone disagreed with. Duh. The part that rubbed people the wrong way the most was a strong recommendation NOT to use many Perlisms (loosely defined as code idioms not present in, say, C++ or Java), such as "Avoid using '... unless X;' constructs". The main rationale for rules like this one was that non-Perl developers would otherwise have a much harder time with the Perl code base. The assumption here, I guess, is that Perl code jockeys are a rarer breed overall - and among new hires to the company - than non-Perlers. I was wondering whether SO has any good arguments to support or reject this logic... it is mostly academic curiosity at this point, as the company's Perl coding standard is ossified and will never be revised again as far as I'm aware. P.S. Just to be clear, the question is in the context I noted - the answer for an all-Perl smaller development shop is obviously a resounding "use Perl to its maximum capability".

    Read the article

  • How can I beta test web Perl modules under Apache/mod_perl on production web server?

    - by DVK
    We have a setup where most code, before being promoted to full production, is deployed in BETA mode - meaning it runs in the full production environment (using the production database - usually production data - and the production web server). We call that stage BETA testing. One of the main requirements is that promoting BETA code to production must be a simple "cp" command from the beta directory to the production directory - no code or filename changes. For non-web Perl code, achieving a seamless BETA test is quite doable (see details here): Perl programs live in a standard location under the production root (/usr/code/scripts), with production Perl modules living under the same root (/usr/code/lib/perl). The BETA code has 100% the same code paths, except under the beta root (/usr/code/beta/). A special module manipulates @INC of any script based on whether the script was called from /usr/code/scripts or /usr/code/test/scripts, to include beta libraries for beta scripts (a sketch of the idea follows below). This setup works fine up until we need to beta test our web Perl code (the setup is EmbPerl and Apache/mod_perl). The hang-up is as follows: if a production Perl module and a BETA Perl module have the same name (e.g. /usr/code/lib/perl/MyLib1.pm and /usr/code/beta/lib/perl/MyLib1.pm), then mod_perl will only be able to load ONE of these modules into memory - and there's no way we are aware of for a particular web page to affect which version of the module is currently loaded, due to concurrency issues. Leaving aside the obvious non-programming solution (get a bloody BETA web server), which for political/organizational reasons is not feasible, is there any way we can somehow hack around this problem in either Perl or mod_perl? I played around with various approaches to unloading Perl modules listed in %INC, but the problem remains that another user might load a beta page at just the right (or rather wrong) moment and have the beta module loaded and used for my production page.
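    For readers who don't know Perl, here is the same path-switching idea as a rough Python-flavored sketch, with sys.path standing in for @INC. The directory names come from the question; the helper name is made up for illustration.

    # Rough analogue of the @INC-manipulating module described above: a
    # script running from under the beta root gets the beta library
    # directory prepended to its module search path, so beta modules
    # shadow the production copies; production scripts are untouched.
    import os
    import sys

    BETA_ROOT = "/usr/code/beta"
    BETA_LIBS = os.path.join(BETA_ROOT, "lib", "perl")

    def adjust_library_path(script_path):
        # Hypothetical helper, called once at script startup.
        if os.path.abspath(script_path).startswith(BETA_ROOT + os.sep):
            sys.path.insert(0, BETA_LIBS)

    adjust_library_path(sys.argv[0])

    The mod_perl hang-up described in the question is exactly that this per-process switch breaks down once a single long-lived interpreter serves both trees: whichever copy of MyLib1.pm loads first stays resident for every later request.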

    Read the article

  • Unity doesn't work

    - by Budda
    Yesterday I implemented this code: CustomerProductManager productsManager = container.Resolve<CustomerProductManager>(); It compiled and worked. Today (probably because I've modified something) I am constantly getting the error: The non-generic method 'Microsoft.Practices.Unity.IUnityContainer.Resolve(System.Type, string, params Microsoft.Practices.Unity.ResolverOverride[])' cannot be used with type arguments My colleague has the same source code and doesn't get this error. Why? How do I resolve the problem? P.S. The line "using Microsoft.Practices.Unity;" is present in the usings section. I've tried to replace the generic version with the non-generic one: CustomerProductManager productsManager = (CustomerProductManager)container.Resolve(typeof(CustomerProductManager)); And got another error: No overload for method 'Resolve' takes '1' arguments It seems like one of the assemblies is not referenced... but which one? I have 2 of them referenced: 1. Microsoft.Practices.Unity.dll 2. Microsoft.Practices.ServiceLocation.dll P.P.S. I've seen a similar problem http://unity.codeplex.com/WorkItem/View.aspx?WorkItemId=8205 but it was resolved as "not a bug". Any thoughts would be helpful

    Read the article

  • Problem with cascade delete using Entity Framework and System.Data.SQLite

    - by jamone
    I have a SQLite DB that is set up so that when I delete a Person, the delete is cascaded. This works fine when I manually delete a Person (all records that reference the PersonID are deleted). But when I use Entity Framework to delete the Person I get an error: System.InvalidOperationException: The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted. I don't understand why this is occurring. My trigger is set to clean up all related objects before deleting the object it was told to delete. When I go into the model editor and check the properties of the relationship, it shows no action for the OnDelete property. Why isn't this set correctly by pulling it from the DB? If I change this value to Cascade everything works properly, but I would rather not rely on this manual change, because what if I refresh my model from the DB and it loses that? Here's the relevant SQL for my tables. CREATE TABLE [SomeTable] ( [SomeTableID] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, [PersonID] INTEGER NOT NULL REFERENCES [Person](PersonID) ON DELETE CASCADE ) CREATE TABLE [Person] ( [PersonID] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT )

    Read the article

  • how to read user input from custom dialog in android?

    - by urobo
    I'd like to use a custom dialog built over an AlertDialog to obtain login info from the user. I first use the LayoutInflater to get the layout and then pass it to the AlertDialog.Builder.setView() method. LayoutInflater inflater = (LayoutInflater) Home.this.getSystemService(LAYOUT_INFLATER_SERVICE); layoutLogin = inflater.inflate(R.layout.login,(ViewGroup) findViewById(R.id.rl)); My layout consists of two TextViews and two EditTexts, for username and password respectively. Then I override the onCreateDialog method, checking the dialog id and putting it all together; during the building phase I use the setButton(...) method to add a confirmation button (neutral, though): /* (non-Javadoc) * @see android.app.Activity#onCreateDialog(int) */ @Override protected Dialog onCreateDialog(int id) { AlertDialog d = null; AlertDialog.Builder builder = new AlertDialog.Builder(this); switch(id){ ... case Home.DIALOG_LOGIN: builder.setView(layoutLogin); builder.setMessage("Sign in to your DyCaPo Account").setCancelable(false); d=builder.create(); d.setTitle("Login"); Message msg = new Message(); msg.setTarget(Home.this.handleLogin); d.setButton(Dialog.BUTTON_NEUTRAL,"Sign in",msg); break; ... } return d; } Then I set up the Handler handleLogin: private Handler handleLogin= new Handler(){ /* (non-Javadoc) * @see android.os.Handler#handleMessage(android.os.Message) */ @Override public void handleMessage(Message msg) { String input = usernameInput.getText().toString(); //this should hold the EditText field for the username } }; which is just a stub for now. What I don't get is when and where I have to access the two fields, since I tried to save a reference to them but unfortunately I always get a NullPointerException. Can anyone tell me what I'm doing wrong and give some guidelines for working with custom dialogs? Thanks in advance! :)

    Read the article

  • IIS 7.5, ASP.NET, impersonation, and access to C:\Windows\Temp

    - by Heinzi
    Summary: One of our web applications requires write access to C:\Windows\Temp. However, no matter how much I weaken the NTFS permissions, procmon shows ACCESS DENIED. Background (which might or might not be relevant to the problem): We are using OLEDB to access an MS Access database (which is located outside of C:\Windows\Temp). Unfortunately, this OLEDB driver requires write access to the user profile's TEMP directory (which happens to be C:\Windows\Temp when running under IIS 7.5); otherwise the dreaded "Unspecified Error" OleDbException is thrown. See KB 926939 for details. I followed the steps in the KB article, but it doesn't help. Details: This is the output of icacls C:\Windows\Temp. For debugging purposes I gave full permissions to Everyone. C:\Windows\Temp NT AUTHORITY\SYSTEM:(OI)(CI)(F) CREATOR OWNER:(OI)(CI)(IO)(F) BUILTIN\IIS_IUSRS:(OI)(CI)(S,RD) BUILTIN\Users:(CI)(S,WD,AD,X) BUILTIN\Administrators:(OI)(CI)(F) Everyone:(OI)(CI)(F) However, this is the screenshot of procmon: Desired Access: Generic Read/Write, Delete Disposition: Create Options: Synchronous IO Non-Alert, Non-Directory File, Random Access, Delete On Close, Open No Recall Attributes: NT ShareMode: None AllocationSize: 0 Impersonating: MYDOMAIN\myuser

    Read the article

  • "cloud architecture" concepts in a system architecture diagrams

    - by markus
    If you design a distributed application for easy scale-out, or you just want to make use of any of the new “cloud computing” offerings by Amazon, Google or Microsoft, there are some typical concepts or components you usually end up using: distributed blob storage (aka S3); asynchronous, durable message queues (aka SQS); non-relational / non-transactional databases (like SimpleDB, Google BigTable, Azure SQL Services); a distributed background worker pool; load-balanced, edge-service processes handling user requests (often virtualized); distributed caches (like memcached); and a CDN (content delivery network, like Akamai). Now when it comes to designing and sketching an architecture that makes use of such patterns, are there any commonly used symbols I could use? Or even a download with some cool Visio stencils? :) It doesn’t have to be a formal system like UML, but I think it would be great if there were symbols that everyone knows and understands, like the commonly used shapes for databases or documents, for example. I think it would be important not to mix it up with traditional concepts like a normal file system (local or network server/SAN) or a relational database. Simply speaking, I want to be able to draw some conclusions about an application’s scalability or data consistency issues by just looking at the system architecture overview diagram. Update: Thank you very much for your answers. I like the idea of putting a small "cloud symbol" on the traditional symbols. However, I'll leave this thread open just in case someone finds specific symbols (maybe in a book or so) - or uploads some pimped-up Visio stencils ;)

    Read the article

< Previous Page | 155 156 157 158 159 160 161 162 163 164 165 166  | Next Page >