Search Results

Search found 27161 results on 1087 pages for 'information schema'.

Page 149/1087

  • How to add/update a Lotus Notes 8.5 signature from C# using an API

    - by faris
    I want to build a third-party tool that lets our employees fill in the Contact Form information and then add that information as a signature image in Lotus Notes, so that when they compose a new email the signature is added automatically. We use Lotus Notes 8.5. How can I do it? Is there any API available to make this happen? I appreciate any help.
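
    One possible starting point is the Lotus Notes/Domino COM automation classes, which can be driven from C# via COM interop. The sketch below is only a rough illustration under several assumptions: it assumes a local Notes client install, a COM reference to the "Lotus Domino Objects" library, and that the 8.5 mail template stores signature settings in the CalendarProfile profile document; the field names and values shown are assumptions to verify against your own mail database.

    // Rough sketch: set a user's stored signature through the Domino COM API (names below are assumptions).
    using Domino;

    class SignatureUpdater
    {
        static void Main()
        {
            NotesSession session = new NotesSessionClass();
            session.Initialize("notesPassword");                    // hypothetical Notes password

            // Server name and mail file path are placeholders.
            NotesDatabase mailDb = session.GetDatabase("YourServer/YourOrg", @"mail\jdoe.nsf", false);

            // The 8.5 mail template typically keeps signature settings in the Calendar Profile document.
            NotesDocument profile = mailDb.GetProfileDocument("CalendarProfile");
            profile.ReplaceItemValue("SignatureOption", "1");        // assumed: "1" = plain-text signature
            profile.ReplaceItemValue("Signature", "Jane Doe\nContoso Ltd.\n+1 555 0100");
            profile.Save(true, false);
        }
    }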

    Read the article

  • C# Windows Live Messenger Connectivity?

    - by Pcstalljr
    Hey there guys, I'm working on an IRC bot project and trying to integrate Windows Live Messenger into the bot, so that messages it receives are relayed to the channel. The current problem is that the old Messenger API I used no longer works, and for the current API I can only find information about add-ins (complicated for the end user to set up unless I make an installer) or about contact information. I would like my bot to be stand-alone (no Messenger client required) and have it log in by itself, but I cannot find information on the login process anywhere. Any ideas?

    Read the article

  • SQL SERVER – Introduction to Adaptive ETL Tool – How adaptive is your ETL?

    - by pinaldave
    I am often reminded of the fact that BI/data warehousing infrastructure is very brittle and not very adaptive to change. There are lots of basic use cases where data needs to be frequently loaded into SQL Server or another database. What I have found is that as long as the sources and targets stay the same, SSIS, or any other ETL tool for that matter, does a pretty good job handling these types of scenarios. But what happens when you are faced with more challenging scenarios, where the data formats and possibly the data types of the source data are changing from customer to customer?

Let's examine a real-life situation where a health management company receives claims data from its customers in various source formats. Even though this company supplied all of its customers with the same claims forms, it ended up building one-off ETL applications to process the claims for each customer. Why, you ask? Well, it turned out that the claims data from the various regional hospitals they needed to process had slightly different data formats, e.g. "integer" versus "string" data field definitions. Moreover, the data itself was represented with slight nuances, e.g. "0001124" or "1124" or "0000001124" to represent a particular account number, which forced them, as I alluded to above, to build new ETL processes for each customer in order to overcome the inconsistencies in the various claims forms. As a result, they experienced a lot of redundancy in these ETL processes and quickly recognized that their system would become more difficult to maintain over time.

So imagine for a moment that you could use an ETL tool that helps you abstract the data formats so that your ETL transformation process becomes more reusable. Imagine that one claims form represents a data item as a string, acc_no(varchar), while a second claims form represents the same data item as an integer, account_no(integer). This would break your traditional ETL process, as the data mappings are hard-wired. But in a world of abstracted definitions, all you need to do is create parallel data mappings to a common data representation used within your ETL application; that is, map both external data fields to a common attribute whose name and type remain unchanged within the application:

acc_no(varchar) is mapped to account_number(integer)  (expressor Studio, first claim form schema mapping)
account_no(integer) is also mapped to account_number(integer)  (expressor Studio, second claim form schema mapping)

All the data processing logic that follows manipulates the data as an integer value named account_number.

Well, these are the kind of problems that the expressor data integration solution automates for you. I've been following them since last year and encourage you to check them out by downloading their free expressor Studio ETL software.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ETL, SSIS
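
expressor's schema abstraction is configured in its Studio UI rather than in code, but the underlying idea, mapping each customer-specific field name and type onto one canonical attribute so the downstream logic never changes, can be sketched in a few lines of C#. The ClaimRecord type and field names below are invented purely for illustration:

    using System;
    using System.Collections.Generic;

    // Canonical representation that all downstream processing works against.
    record ClaimRecord(int AccountNumber);

    static class ClaimMapper
    {
        // Per-customer mapping of source field names onto the one canonical attribute.
        static readonly Dictionary<string, string> SourceFieldToCanonical = new()
        {
            ["acc_no"]     = "AccountNumber",   // customer A sends a varchar field
            ["account_no"] = "AccountNumber",   // customer B sends an integer field
        };

        public static ClaimRecord Map(string sourceField, object rawValue)
        {
            if (!SourceFieldToCanonical.TryGetValue(sourceField, out var canonical) || canonical != "AccountNumber")
                throw new ArgumentException($"Unmapped source field: {sourceField}");

            // "0001124", "1124" and 1124 all normalize to the same integer value.
            int accountNumber = Convert.ToInt32(rawValue);
            return new ClaimRecord(accountNumber);
        }
    }

With something like this in place, supporting a new customer format means adding a mapping entry rather than building another ETL process, which is essentially the reuse argument made above.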

    Read the article

  • How can I resolve gstreamer dependencies in Ubuntu

    - by michael
    Hi, can you please tell me how I can resolve these dependencies on Ubuntu:

checking for GSTREAMER... configure: error: Package requirements (gstreamer-0.10 >= 0.10 gstreamer-app-0.10 gstreamer-base-0.10 gstreamer-pbutils-0.10 gstreamer-plugins-base-0.10 >= 0.10.25 gstreamer-video-0.10) were not met:
No package 'gstreamer-app-0.10' found
No package 'gstreamer-pbutils-0.10' found
No package 'gstreamer-plugins-base-0.10' found
No package 'gstreamer-video-0.10' found

I have tried:

$ sudo apt-get install *gstreamer-video*
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Regex compilation error - Invalid preceding regular expression

$ sudo apt-get install *gstreamer-app*
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Regex compilation error - Invalid preceding regular expression

$ sudo apt-get install *gstreamer-base*
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Regex compilation error - Invalid preceding regular expression

    Read the article

  • Qt's QGraphicsView clickable icon on screen

    - by goodwince
    I'm working on a project with Qt and am trying to draw icons from a database. I have auxiliary information in the table that I would like to display if the user chooses to see it (i.e. the x, y position of the icon and some other options from the database). I am debating whether it would be better to redraw all the icons with this information added, or to loop through the icons and set some flag to true so they display the information. Thanks in advance.

    Read the article

  • Mixed-mode C++/CLI crashing: heap corruption in atexit (static destructor registration)

    - by thaimin
    I am working on deploying a program whose codebase is a mixture of C++/CLI and C#. The C++/CLI comes in all flavors: native, mixed (/clr), and safe (/clr:safe). In my development environment I build a DLL from all the C++/CLI code and reference that from the C# code (EXE). This method works flawlessly. For my releases, however, I want to ship a single executable (simply answering "why not just keep the DLL and EXE separate?" is not acceptable). So far I have succeeded in compiling the EXE from all the different sources. However, when I run it I get the "XXXX has stopped working" dialog with options to Check online, Close and Debug. The problem details are as follows:

Problem Event Name: APPCRASH
Fault Module Name: StackHash_8d25
Fault Module Version: 6.1.7600.16559
Fault Module Timestamp: 4ba9b29c
Exception Code: c0000374
Exception Offset: 000cdc9b
OS Version: 6.1.7600.2.0.0.256.48
Locale ID: 1033
Additional Information 1: 8d25
Additional Information 2: 8d25552d834e8c143c43cf1d7f83abb8
Additional Information 3: 7450
Additional Information 4: 74509ce510cd821216ce477edd86119c

If I choose Debug and send it to Visual Studio, it reports: "Unhandled exception at 0x77d2dc9b in XXX.exe: A heap has been corrupted". Choosing Break results in it stopping at ntdll.dll!77d2dc9b() with no additional information. If I tell Visual Studio to continue, the program starts up fine and seems to work without incident, probably because a debugger is now attached. What do you make of this? How do I avoid this heap corruption? The program seems to work fine apart from this. My abridged compilation script is as follows (I have omitted my error checking for brevity):

@set TARGET=x86
@set TARGETX=x86
@set OUT=%TARGETX%
@call "%VS90COMNTOOLS%\..\..\VC\vcvarsall.bat" %TARGET%
@set WIMGAPI=C:\Program Files\Windows AIK\SDKs\WIMGAPI\%TARGET%
set CL=/Zi /nologo /W4 /O2 /GS /EHa /MD /MP /D NDEBUG /D _UNICODE /D UNICODE /D INTEGRATED /Fd%OUT%\ /Fo%OUT%\
set INCLUDE=%WIMGAPI%;%INCLUDE%
set LINK=/nologo /LTCG /CLRIMAGETYPE:IJW /MANIFEST:NO /MACHINE:%TARGETX% /SUBSYSTEM:WINDOWS,6.0 /OPT:REF /OPT:ICF /DEFAULTLIB:msvcmrt.lib
set LIB=%WIMGAPI%;%LIB%
set CSC=/nologo /w:4 /d:INTEGRATED /o+ /target:module
:: Compiling resources omitted
@set CL_NATIVE=/c /FI"stdafx-native.h"
@set CL_MIXED=/c /clr /LN /FI"stdafx-mixed.h"
@set CL_PURE=/c /clr:safe /LN /GL /FI"stdafx-pure.h"
@set NATIVE=...
@set MIXED=...
@set PURE=...
cl %CL_NATIVE% %NATIVE%
cl %CL_MIXED% %MIXED%
cl %CL_PURE% %PURE%
link /LTCG /NOASSEMBLY /DLL /OUT:%OUT%\core.netmodule %OUT%\*.obj
csc %CSC% /addmodule:%OUT%\core.netmodule /out:%OUT%\GUI.netmodule /recurse:*.cs
link /FIXED /ENTRY:GUI.Program.Main /OUT:%OUT%\XXX.exe ^
  /ASSEMBLYRESOURCE:%OUT%\core.resources,XXX.resources,PRIVATE /ASSEMBLYRESOURCE:%OUT%\GUI.resources,GUI.resources,PRIVATE ^
  /ASSEMBLYMODULE:%OUT%\core.netmodule %OUT%\gui.res %OUT%\*.obj %OUT%\GUI.netmodule

Update 1: After compiling with debug symbols and trying again, I do in fact get more information.
The call stack is:

msvcr90d.dll!_msize_dbg(void * pUserData, int nBlockUse) Line 1511 + 0x30 bytes
msvcr90d.dll!_dllonexit_nolock(int (void)* func, void (void)* * * pbegin, void (void)* * * pend) Line 295 + 0xd bytes
msvcr90d.dll!__dllonexit(int (void)* func, void (void)* * * pbegin, void (void)* * * pend) Line 273 + 0x11 bytes
XXX.exe!_onexit(int (void)* func) Line 110 + 0x1b bytes
XXX.exe!atexit(void (void)* func) Line 127 + 0x9 bytes
XXX.exe!`dynamic initializer for 'Bytes::Null''() Line 7 + 0xa bytes
mscorwks.dll!6cbd1b5c() [Frames below may be incorrect and/or missing, no symbols loaded for mscorwks.dll]
...

The line of my code that 'causes' this (the dynamic initializer for Bytes::Null) is:

Bytes Bytes::Null;

In the header it is declared as:

class Bytes { public: static Bytes Null; };

I also tried a global declared extern in the header, like so:

extern Bytes Null; // header
Bytes Null; // cpp file

which failed in the same way. It seems that the CRT atexit function is responsible, being inadvertently required by the static initializer.

Fix: As Ben Voigt pointed out, the use of any CRT functions (including native static initializers) requires proper initialization of the CRT (which happens in mainCRTStartup, WinMainCRTStartup, or _DllMainCRTStartup). I have added a mixed C++/CLI file that has a C++ main or WinMain:

using namespace System;
[STAThread] // required if using STA COM objects (such as drag-and-drop or file dialogs)
int main() { // or "int __stdcall WinMain(void*, void*, wchar_t**, int)" for GUI applications
    array<String^> ^args_orig = Environment::GetCommandLineArgs();
    int l = args_orig->Length - 1; // remove the first argument (the program name)
    array<String^> ^args = gcnew array<String^>(l);
    if (l > 0) Array::Copy(args_orig, 1, args, 0, l);
    return XXX::CUI::Program::Main(args);
    // return XXX::GUI::Program::Main(args);
}

After doing this, the program gets a little further but still has issues (which will be addressed elsewhere):
- When the program is solely in C# it works fine, and likewise whenever it is just calling C++/CLI methods, getting C++/CLI properties, or creating managed C++/CLI objects.
- Events added by C# into the C++/CLI code never fire (even though they should).
- One other weird error is an InvalidCastException saying it can't cast from X to X (where X is the same type as X...).

However, since the heap corruption is fixed (by getting the CRT initialized), the question is done.

    Read the article

  • SQL SERVER – 5 Tips for Improving Your Data with expressor Studio

    - by pinaldave
    It’s no secret that bad data leads to bad decisions and poor results.  However, how do you prevent dirty data from taking up residency in your data store?  Some might argue that it’s the responsibility of the person sending you the data.  While that may be true, in practice that will rarely hold up.  It doesn’t matter how many times you ask, you will get the data however they decide to provide it.

So now you have bad data.  What constitutes bad data?  There are quite a few valid answers, for example:
- Invalid date values
- Inappropriate characters
- Wrong data
- Values that exceed a pre-set threshold

While it is certainly possible to write your own scripts and custom SQL to identify and deal with these data anomalies, that effort often takes too long and becomes difficult to maintain.  Instead, leveraging an ETL tool like expressor Studio makes the data cleansing process much easier and faster.  Below are some tips for leveraging expressor to get your data into tip-top shape.

Tip 1: Build reusable data objects with embedded cleansing rules. One of the new features in expressor Studio 3.2 is the ability to define constraints at the metadata level.  Using expressor’s concept of Semantic Types, you can define reusable data objects that have embedded logic such as constraints for dealing with dirty data.  Once defined, they can be saved as a shared atomic type and then re-applied to other data attributes in other schemas. As you can see in the figure above, I’ve defined a constraint on zip code.  I can then save the constraint rules I defined for zip code as a shared atomic type called zip_type, for example.  The next time I get a different data source with a schema that also contains a zip code field, I can simply apply the shared atomic type (shown below) and the previously defined constraints will be automatically applied.

Tip 2: Unlock the power of regular expressions in Semantic Types. Another powerful feature introduced in expressor Studio 3.2 is the option to use a regular expression as a constraint.  A regular expression is used to identify patterns within data.  The patterns could be something as simple as a date format or something much more complex such as a street address.  For example, I could define that a valid IP address should be made up of 4 numbers, each 0 to 255, and separated by a period.  So 192.168.23.123 might be a valid IP address whereas 888.777.0.123 would not be.  How can I account for this using regular expressions? A very simple regular expression that would accept any four groups of one to three digits separated by periods would be:  ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$  Alternatively, the following would be the exact check for truly valid IP addresses as we defined above:  ^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$  In expressor, we would enter this regular expression as a constraint like this: here we select the corrective action to be ‘Escalate’, meaning that the expressor Dataflow operator will decide what to do.  Some of the options include rejecting the offending record, skipping it, or aborting the dataflow.

Tip 3: Email pattern expressions that might come in handy. In the example schema that I am using, there’s a field for email.  Email addresses are often entered incorrectly because people are trying to avoid spam.
While there are a lot of different ways to define what constitutes a valid email address, a quick search online yields a couple of really useful regular expressions for validating email addresses. This one is short and sweet:  \b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b  (Source: http://www.regular-expressions.info/)  This one is more specific about which characters are allowed:  ^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$  (Source: http://regexlib.com/REDetails.aspx?regexp_id=26)

Tip 4: Reject “dirty data” for analysis or further processing. Yet another feature introduced in expressor Studio 3.2 is the ability to reject records based on constraint violations.  To capture reject records on input, simply specify Reject Record in the Error Handling setting for the Read File operator, then attach a Write File operator to the reject port of the Read File operator. Next, in the Write File operator, you can configure the expressor operator in a similar way to the Read File.  The key difference is that the schema needs to be derived from the upstream operator, as shown below. Once configured, expressor will output rejected records to the file you specified.  In addition to the rejected records, expressor also captures some diagnostic information that will be helpful in identifying why the record was rejected.  This makes diagnosing errors much easier!

Tip 5: Use a Filter or Transform after the initial cleansing to finish the job. Sometimes you may want to predicate the data cleansing on a more complex set of conditions.  For example, I may only be interested in processing data containing males over the age of 25 in certain zip codes.  Using an expressor Filter operator, you can define the conditional logic which isolates the records of importance away from the others. Alternatively, the expressor Transform operator can be used to alter the input value via a user-defined algorithm or transformation.  It also supports the use of conditional logic, and data can be rejected based on constraint violations. However, the best tip I can leave you with is to not constrain your solution design approach: expressor operators can be combined in many different ways to achieve the desired results.  For example, in the expressor Dataflow below, I can post-process the reject data from the Filter which did not meet my pre-defined criteria and, if successful, Funnel it back into the flow so that it gets written to the target table.

I continue to be impressed that expressor offers all this functionality as part of their FREE expressor Studio desktop ETL tool, which you can download from here.  Their Studio ETL tool is absolutely free and they are very open about saying that if you want to deploy their software on a dedicated Windows Server, you need to purchase their server software, whose pricing is posted on their website.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
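
If you want to sanity-check patterns like these before wiring them into an expressor constraint, a few lines in any language with a regex engine will do. Here is a minimal C# sketch using the strict IPv4 pattern quoted in Tip 2; the surrounding class and method names are just scaffolding:

    using System;
    using System.Text.RegularExpressions;

    class RegexConstraintCheck
    {
        // Strict IPv4 pattern from Tip 2: each octet limited to 0-255.
        const string IpPattern =
            @"^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\." +
            @"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\." +
            @"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\." +
            @"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$";

        static void Main()
        {
            Console.WriteLine(Regex.IsMatch("192.168.23.123", IpPattern)); // True
            Console.WriteLine(Regex.IsMatch("888.777.0.123", IpPattern));  // False: 888 is out of range
        }
    }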

    Read the article

  • How do I share a WiX fragment in two WiX projects?

    - by Randy Eppinger
    We have a WiX fragment in a file SomeDialog.wxs that prompts the user for some information. It's referenced by another fragment in the InstallerUI.wxs file that controls the dialog order. Product.wxs is, of course, our main file. This works great. Now I have a second Visual Studio 2008 WiX 3.0 project for the .MSI of another application, and it needs to ask the user for the same information. I can't figure out the best way to share the file so that changing the requested information results in both .MSIs getting the new behavior. I honestly can't tell whether a merge module, a .wxi (include) or a .wixlib is the right solution. I had hoped to find a simple example of someone doing this, but I have failed thus far. Edit: Based on Rob Mensching's wixlib blog entry, a wixlib may be the answer, but I am still searching for an example of how to do this.

    Read the article

  • jQuery Templates - XHTML Validation

    - by hajan
    Many developers have already asked me about this. How to make XHTML valid the web page which uses jQuery Templates. Maybe you have already tried, and I don't know what are your results but here is my opinion regarding this. By default, Visual Studio.NET adds the xhtml1-transitional.dtd schema <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> So, if you try to validate your page which has jQuery Templates against this schema, your page won't be XHTML valid. Why? It's because when creating templates, we use HTML tags inside <script> ... </script> block. Yes, I know that the script block has type="text/html" but it's not supported in this schema, thus it's not valid. Let's try validate the following code Code <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head>     <title>jQuery Templates :: XHTML Validation</title>     <script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.min.js" type="text/javascript"></script>     <script src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js" type="text/javascript"></script>          <script language="javascript" type="text/javascript">         $(function () {             var attendees = [                 { Name: "Hajan", Surname: "Selmani", speaker: true, phones: [070555555, 071888999, 071222333] },                 { Name: "Denis", Surname: "Manski", phones: [070555555, 071222333] }             ];             $("#myTemplate").tmpl(attendees).appendTo("#attendeesList");         });     </script>     <script id="myTemplate" type="text/html">          <li>             ${Name} ${Surname}             {{if speaker}}                 (<font color="red">speaks</font>)             {{else}}                 (attendee)             {{/if}}         </li>     </script>      </head>     <body>     <ol id="attendeesList"></ol> </body> </html> To validate it, go to http://validator.w3.org/#validate_by_input and copy paste the code rendered on client-side browser (it’s almost the same, only the template is rendered inside OL so LI tags are created for each item). Press CHECK and you will get: Result: 1 Errors, 2 warning(s)  The error message says: Validation Output: 1 Error Line 21, Column 13: document type does not allow element "li" here <li> Yes, the <li> HTML element is not allowed inside the <script>, so how to make it valid? FIRST: Using <![CDATA][…]]> The first thing that came in my mind was the CDATA. So, by wrapping any HTML tag which is in script blog, inside <![CDATA[ ........ ]]> it will make our code valid. However, the problem is that the template won't render since the template tags {} cannot get evaluated if they are inside CDATA. Ok, lets try with another approach. SECOND: HTML5 validation Well, if we just remove the strikethrough part bellow of the !DOPCTYPE <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> our template is going to be checked as HTML5 and will be valid. Ok, there is another approach I've also tried: THIRD: Separate template to an external file We can separate the template to external file. I didn’t show how to do this previously, so here is the example. 1. Add HTML file with name Template.html in your ASPX website. 2. 
Place your defined template there without <script> tag Content inside Template.html <li>     ${Name} ${Surname}     {{if speaker}}         (<font color="red">speaks</font>)     {{else}}         (attendee)     {{/if}} </li> 3. Call the HTML file using $.get() jQuery ajax method and render the template with data using $.tmpl() function. $.get("/Templates/Template.html", function (template) {     $.tmpl(template, attendees).appendTo("#attendeesList"); }); So the complete code is: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head>     <title>jQuery Templates :: XHTML Validation</title>     <script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.min.js" type="text/javascript"></script>     <script src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.js" type="text/javascript"></script>          <script language="javascript" type="text/javascript">         $(function () {             var attendees = [                 { Name: "Hajan", Surname: "Selmani", speaker: true, phones: [070555555, 071888999, 071222333] },                 { Name: "Denis", Surname: "Manski", phones: [070555555, 071222333] }             ];             $.get("/Templates/Template.html", function (template) {                 $.tmpl(template, attendees).appendTo("#attendeesList");             });         });     </script>      </head>     <body>     <ol id="attendeesList"></ol> </body> </html> This document was successfully checked as XHTML 1.0 Transitional! Result: Passed If you have any additional methods for XHTML validation, you can share it :). Thanks,Hajan

    Read the article

  • Quick question on session security.

    - by Scarface
    Hey guys, I was scanning my site for security and noticed that it was possible for non-users to send requests and post information, so I decided to put login checks on all information posts. I was wondering whether a good approach would be to create a session id with md5(uniqid()), keep it in a session variable ($_SESSION['id'] = md5(uniqid());) for each user, and store it in a database under active users for that user. Then, when a user tries to insert information, verify that their $_SESSION['id'] variable is equal to the one in the database row where the username equals their $_SESSION['username']. What are your ideas on this, guys? Thanks in advance!

    Read the article

  • LLBLGen Pro feature highlights: grouping model elements

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) When working with an entity model which has more than a few entities, it's often convenient to be able to group entities together if they belong to a semantic sub-model. For example, if your entity model has several entities which are about 'security', it would be practical to group them together under the 'security' moniker. This way, you could easily find them back, yet they can be left inside the complete entity model altogether so their relationships with entities outside the group are kept. In other situations your domain consists of semi-separate entity models which all target tables/views which are located in the same database. It then might be convenient to have a single project to manage the complete target database, yet have the entity models separate of each other and have them result in separate code bases. LLBLGen Pro can do both for you. This blog post will illustrate both situations. The feature is called group usage and is controllable through the project settings. This setting is supported on all supported O/R mapper frameworks. Situation one: grouping entities in a single model. This situation is common for entity models which are dense, so many relationships exist between all sub-models: you can't split them up easily into separate models (nor do you likely want to), however it's convenient to have them grouped together into groups inside the entity model at the project level. A typical example for this is the AdventureWorks example database for SQL Server. This database, which is a single catalog, has for each sub-group a schema, however most of these schemas are tightly connected with each other: adding all schemas together will give a model with entities which indirectly are related to all other entities. LLBLGen Pro's default setting for group usage is AsVisualGroupingMechanism which is what this situation is all about: we group the elements for visual purposes, it has no real meaning for the model nor the code generated. Let's reverse engineer AdventureWorks to an entity model. By default, LLBLGen Pro uses the target schema an element is in which is being reverse engineered, as the group it will be in. This is convenient if you already have categorized tables/views in schemas, like which is the case in AdventureWorks. Of course this can be switched off, or corrected on the fly. When reverse engineering, we'll walk through a wizard which will guide us with the selection of the elements which relational model data should be retrieved, which we can later on use to reverse engineer to an entity model. The first step after specifying which database server connect to is to select these elements. below we can see the AdventureWorks catalog as well as the different schemas it contains. We'll include all of them. After the wizard completes, we have all relational model data nicely in our catalog data, with schemas. So let's reverse engineer entities from the tables in these schemas. We select in the catalog explorer the schemas 'HumanResources', 'Person', 'Production', 'Purchasing' and 'Sales', then right-click one of them and from the context menu, we select Reverse engineer Tables to Entity Definitions.... This will bring up the dialog below. We check all checkboxes in one go by checking the checkbox at the top to mark them all to be added to the project. 
As you can see LLBLGen Pro has already filled in the group name based on the schema name, as this is the default and we didn't change the setting. If you want, you can select multiple rows at once and set the group name to something else using the controls on the dialog. We're fine with the group names chosen so we'll simply click Add to Project. This gives the following result:   (I collapsed the other groups to keep the picture small ;)). As you can see, the entities are now grouped. Just to see how dense this model is, I've expanded the relationships of Employee: As you can see, it has relationships with entities from three other groups than HumanResources. It's not doable to cut up this project into sub-models without duplicating the Employee entity in all those groups, so this model is better suited to be used as a single model resulting in a single code base, however it benefits greatly from having its entities grouped into separate groups at the project level, to make work done on the model easier. Now let's look at another situation, namely where we work with a single database while we want to have multiple models and for each model a separate code base. Situation two: grouping entities in separate models within the same project. To get rid of the entities to see the second situation in action, simply undo the reverse engineering action in the project. We still have the AdventureWorks relational model data in the catalog. To switch LLBLGen Pro to see each group in the project as a separate project, open the Project Settings, navigate to General and set Group usage to AsSeparateProjects. In the catalog explorer, select Person and Production, right-click them and select again Reverse engineer Tables to Entities.... Again check the checkbox at the top to mark all entities to be added and click Add to Project. We get two groups, as expected, however this time the groups are seen as separate projects. This means that the validation logic inside LLBLGen Pro will see it as an error if there's e.g. a relationship or an inheritance edge linking two groups together, as that would lead to a cyclic reference in the code bases. To see this variant of the grouping feature, seeing the groups as separate projects, in action, we'll generate code from the project with the two groups we just created: select from the main menu: Project -> Generate Source-code... (or press F7 ;)). In the dialog popping up, select the target .NET framework you want to use, the template preset, fill in a destination folder and click Start Generator (normal). This will start the code generator process. As expected the code generator has simply generated two code bases, one for Person and one for Production: The group name is used inside the namespace for the different elements. This allows you to add both code bases to a single solution and use them together in a different project without problems. Below is a snippet from the code file of a generated entity class. //... using System.Xml.Serialization; using AdventureWorks.Person; using AdventureWorks.Person.HelperClasses; using AdventureWorks.Person.FactoryClasses; using AdventureWorks.Person.RelationClasses; using SD.LLBLGen.Pro.ORMSupportClasses; namespace AdventureWorks.Person.EntityClasses { //... /// <summary>Entity class which represents the entity 'Address'.<br/><br/></summary> [Serializable] public partial class AddressEntity : CommonEntityBase //... 
The advantage of this is that you can have two code bases and work with them separately, yet have a single target database and maintain everything in a single location. If you decide to move to a single code base, you can do so with a change of one setting. It's also useful if you want to keep the groups as separate models (and code bases) yet want to add relationships to elements from another group using a copy of the entity: you can simply reverse engineer the target table to a new entity into a different group, effectively making a copy of the entity. As there's a single target database, changes made to that database are reflected in both models which makes maintenance easier than when you'd have a separate project for each group, with its own relational model data. Conclusion LLBLGen Pro offers a flexible way to work with entities in sub-models and control how the sub-models end up in the generated code.

    Read the article

  • Payment order option for payment in osCommerce

    - by manish
    Hello sir... I need a PAYMENT ORDER option for making payments: when a customer chooses to pay with a payment order, he has to fill in a form and submit it. I need this for osCommerce. That is, when the user reaches the page where payment information is requested, there will be an option to pay using Purchase Order in the dropdown, and credit card information will not be asked for in this case. They will need to fill out all the information as on this page: https://creator.zoho.com/dmkosanke/form/7. Once this is filled in, an email alert will be sent, and the application will show up under admin, where they can check and approve it.

    Read the article

  • WCF Service worker thread communicating with the ServiceHost thread

    - by Brent
    I have a Windows NT service that opens a ServiceHost object. The service's instance context mode is per-session, so a new worker thread is created for each client. What I am trying to do is have each worker thread make calls to the thread that started the service host. The NT service needs to open a VPN connection and poll information from a device on the remote network. The information is stored in a SQL database for the worker threads to read. I only want to poll the device if there is a client connected, which will reduce network traffic. I would like the worker threads to tell the service host thread that they are requesting information, so that it starts polling and updating the database. Everything works if the device is always being polled and the database is always updated.
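
    One common way to get this coordination, sketched below, is to share a connected-session counter between the per-session service instances and the polling thread in the host process; the poller only talks to the device while the count is non-zero. The class, method and field names here are illustrative, not taken from your service:

    using System.Threading;

    static class ClientTracker
    {
        static int _connectedClients;

        public static void ClientConnected()    => Interlocked.Increment(ref _connectedClients);
        public static void ClientDisconnected() => Interlocked.Decrement(ref _connectedClients);
        public static bool AnyClients           => Volatile.Read(ref _connectedClients) > 0;
    }

    // In the per-session service implementation (for a PerSession service, implementing IDisposable
    // is one simple way to be told when the session ends):
    //   public MyService()    { ClientTracker.ClientConnected(); }
    //   public void Dispose() { ClientTracker.ClientDisconnected(); }
    //
    // In the host thread's polling loop:
    //   while (running)
    //   {
    //       if (ClientTracker.AnyClients)
    //           PollDeviceAndUpdateDatabase();   // hypothetical method that polls over the VPN link
    //       Thread.Sleep(pollInterval);
    //   }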

    Read the article

  • Configuring JPA Primary key sequence generators

    - by pachunoori.vinay.kumar(at)oracle.com
    This article describes the JPA feature of generating and assigning unique sequence numbers to a JPA entity, and provides information on the JPA sequence generator annotations and their usage.

Use case description: adding a new Employee to the organization using the Employee form should assign a unique employee Id. The following steps implement the generation of unique employee numbers using the JPA generators feature.

Steps to configure JPA generators:
1. Generate the Employee entity using the "Entities from Table Wizard".
2. Create a database connection, select the table "Employee" for which the entity will be generated, and finish the wizard with the default selections.
3. In the offline database sources, under Schema, create a Sequence object, or copy one to the offline database from the online database connection.
4. Open persistence.xml in the Application Navigator, select the entity "Employee" in the structure view, and select the "Generators" tab in the flat editor.
5. In the Sequence Generator section, enter the name of the sequence, "InvSeq", and select the sequence created in step 3 from the drop-down list.
6. Expand the Employee entity in the structure view, select EmployeeId, and select the "Primary Key Generation" tab.
7. In the Generated Value section, select the "Use Generated Value" check box, select "Sequence" as the strategy, and select "InvSeq" (defined in step 5) as the generator.

The following annotations get added for the JPA generator configured in JDeveloper for an entity. To use a specific named sequence object (whether it is generated by schema generation or already exists in the database) you must define a sequence generator using a @SequenceGenerator annotation. Provide a unique label as the name for the sequence generator and refer to that name in the @GeneratedValue annotation along with the generation strategy.

For example, see the Employee entity sample code below, configured for sequence generation. EMPLOYEE_ID is the primary key and is configured for auto-generation of sequence numbers. EMPLOYEE_SEQ is the sequence object that exists in the database; it is configured to generate the sequence numbers and assign the value as the primary key to the Employee_id column in the Employee table.

@SequenceGenerator(name="InvSeq", sequenceName = "EMPLOYEE_SEQ")
@Entity
public class Employee implements Serializable {
    @Id
    @Column(name="EMPLOYEE_ID", nullable = false)
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator="InvSeq")
    private Long employeeId;
}

@SequenceGenerator - defines the sequence generator based on a database sequence object. Usage: @SequenceGenerator(name="SequenceGenerator", sequenceName = "EMPLOYEE_SEQ")
@GeneratedValue - defines the generation strategy and refers to the sequence generator. Usage: @GeneratedValue(strategy = GenerationType.SEQUENCE, generator="name of the sequence generator defined in @SequenceGenerator")

    Read the article

  • JAX-WS Consuming web service with WS-Security and WS-Addressing

    - by aurealus
    I'm trying to develop a standalone Java web service client with JAX-WS (Metro) that uses WS-Security with Username Token Authentication (Password digest, nonces and timestamp) and timestamp verification along with WS-Addressing over SSL. The WSDL I have to work with does not define any security policy information. I have been unable to figure out exactly how to add this header information (the correct way to do so) when the WSDL does not contain this information. Most examples I have found using Metro revolve around using Netbeans to automatically generate this from the WSDL which does not help me at all. I have looked into WSIT, XWSS, etc. without much clarity or direction. JBoss WS Metro looked promising not much luck yet there either. Anyone have experience doing this or have suggestions on how to accomplish this task? Even pointing me in the right direction would be helpful. I am not restricted to a specific technology other than it must be Java based.

    Read the article

  • Storing user info in Session using an Object vs. normal variables

    - by justinl
    I'm in the process of implementing a user authentication system for my website. I'm using an open source library that maintains user information by creating a User object and storing that object inside my PHP SESSION variable. Is this the best way to store and access that information? I find it a bit of a hassle to access the user variables because I have to pull the object out first: $userObj = $_SESSION['userObject']; $userObj->userId; instead of just accessing the user ID the way I would usually store it: $_SESSION['userId']; Is there an advantage to storing a bunch of user data as an object instead of just storing the values as individual SESSION variables? ps - The library also seems to store a handful of variables inside the user object (id, username, date joined, email, last user db query), but I really don't care to have all that information stored in my session. I only really want to keep the user id and username.

    Read the article

  • Database per application VS One big database for all applications

    - by Jorge Vargas
    Hello, I'm designing a few applications that will share 2 or 3 database tables, while all of the other tables will be independent of each app. The shared tables contain mostly user information, and there might be cases where other tables need to be shared as well, but that's my instinct speaking. I'm leaning toward the one-database-for-all-applications solution because I want referential integrity and I won't have to keep the same information up to date in each of the databases, but I will probably end up with a database of 100+ tables where only groups of ten tables have related information. The database-per-application approach helps me keep everything more organized, but I don't know of a way to keep the related tables in all databases up to date. So, the basic question is: which of the two approaches do you recommend? Thanks, Jorge Vargas.

    Read the article

  • Google I/O 2011: Querying Freebase: Get More From MQL

    Google I/O 2011: Querying Freebase: Get More From MQL. Jamie Taylor. Freebase's query language, MQL, lets you access data about more than 20 million curated entities and the connections between them. Level up your Freebase query skills with advanced syntax, optimisation tricks, schema introspection, metaschema, and more. From: GoogleDevelopers | Views: 2007 | 15 ratings | Time: 46:49 | More in Science & Technology

    Read the article

  • Problems with updates

    - by legospace9876
    I cannot update Weather Indicator with Update Manager. This is the terminal log:

installArchives() failed:
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LANG = "sr_RS.utf_8_latin"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
(this block of locale warnings is repeated four times in the log)

Setting up indicator-weather (11.11.28-0ubuntu1.1) ...
Installing indicator-specific icons...
Installing indicator dconf schema...
cp: cannot stat `/usr/share/indicator-weather/indicator-weather.gschema.xml': No such file or directory
dpkg: error processing indicator-weather (--configure):
subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing: indicator-weather
Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)

Setting up indicator-weather (11.11.28-0ubuntu1.1) ...
Installing indicator-specific icons...
Installing indicator dconf schema...
cp: cannot stat `/usr/share/indicator-weather/indicator-weather.gschema.xml': No such file or directory
dpkg: error processing indicator-weather (--configure):
subprocess installed post-installation script returned error exit status 1

The file referenced above (/usr/share/indicator-weather/indicator-weather.gschema.xml) really does not exist. How can I solve this problem?

    Read the article

  • MySQL Cluster 7.2: Over 8x Higher Performance than Cluster 7.1

    - by Mat Keep
    Summary: The scalability enhancements delivered by extensions to multi-threaded data nodes enable MySQL Cluster 7.2 to deliver over 8x higher performance than the previous MySQL Cluster 7.1 release on a recent benchmark.

What's New in MySQL Cluster 7.2. MySQL Cluster 7.2 was released as GA (Generally Available) in February 2012, delivering many enhancements to performance on complex queries, a new NoSQL Key/Value API, cross-data center replication and ease-of-use. These enhancements are summarized in the figure below, and detailed in the MySQL Cluster New Features whitepaper. Figure 1: Next Generation Web Services, Cross Data Center Replication and Ease-of-Use. One of the key enhancements delivered in MySQL Cluster 7.2 is the set of extensions made to the multi-threading processes of the data nodes.

Multi-Threaded Data Node Extensions. The MySQL Cluster 7.2 data node is now functionally divided into seven thread types: 1) Local Data Manager threads (ldm; note that these are sometimes also called LQH threads), 2) Transaction Coordinator threads (tc), 3) Asynchronous Replication threads (rep), 4) Schema Management threads (main), 5) Network receiver threads (recv), 6) Network send threads (send), 7) IO threads. Each of these thread types is discussed in more detail below.

MySQL Cluster 7.2 increases the maximum number of LDM threads from 4 to 16. The LDM contains the actual data, which means that when using 16 threads the data is more heavily partitioned (this is automatic in MySQL Cluster). Each LDM thread maintains its own set of data partitions, index partitions and REDO log. The number of LDM partitions per data node is not dynamically configurable; it is possible, however, to map more than one partition onto each LDM thread, providing flexibility in modifying the number of LDM threads.

The TC domain stores the state of in-flight transactions. This means that every new transaction can easily be assigned to a new TC thread. Testing has shown that in most cases 1 TC thread per 2 LDM threads is sufficient, and in many cases even 1 TC thread per 4 LDM threads is acceptable. Testing also demonstrated that in some instances where the workload needed to sustain very high update loads it is necessary to configure 3 to 4 TC threads per 4 LDM threads. In the previous MySQL Cluster 7.1 release, only one TC thread was available. This limit has been increased to 16 TC threads in MySQL Cluster 7.2. The TC domain also manages the Adaptive Query Localization functionality introduced in MySQL Cluster 7.2, which significantly enhanced complex query performance by pushing JOIN operations down to the data nodes.

Asynchronous Replication was separated into its own thread with the release of MySQL Cluster 7.1, and has not been modified in the latest 7.2 release. To scale the number of TC threads, it was necessary to separate the Schema Management domain from the TC domain.
The schema management thread has little load, so is implemented with a single thread. The Network receiver domain was bound to 1 thread in MySQL Cluster 7.1. With the increase of threads in MySQL Cluster 7.2 it is also necessary to increase the number of recv threads to 8. This enables each receive thread to service one or more sockets used to communicate with other nodes the Cluster. The Network send thread is a new thread type introduced in MySQL Cluster 7.2. Previously other threads handled the sending operations themselves, which can provide for lower latency. To achieve highest throughput however, it has been necessary to create dedicated send threads, of which 8 can be configured. It is still possible to configure MySQL Cluster 7.2 to a legacy mode that does not use any of the send threads – useful for those workloads that are most sensitive to latency. The IO Thread is the final thread type and there have been no changes to this domain in MySQL Cluster 7.2. Multiple IO threads were already available, which could be configured to either one thread per open file, or to a fixed number of IO threads that handle the IO traffic. Except when using compression on disk, the IO threads typically have a very light load. Benchmarking the Scalability Enhancements The scalability enhancements discussed above have made it possible to scale CPU usage of each data node to more than 5x of that possible in MySQL Cluster 7.1. In addition, a number of bottlenecks have been removed, making it possible to scale data node performance by even more than 5x. Figure 2: MySQL Cluster 7.2 Delivers 8.4x Higher Performance than 7.1 The flexAsynch benchmark was used to compare MySQL Cluster 7.2 performance to 7.1 across an 8-node Intel Xeon x5670-based cluster of dual socket commodity servers (6 cores each). As the results demonstrate, MySQL Cluster 7.2 delivers over 8x higher performance per data nodes than MySQL Cluster 7.1. More details of this and other benchmarks will be published in a new whitepaper – coming soon, so stay tuned! In a following blog post, I’ll provide recommendations on optimum thread configurations for different types of server processor. You can also learn more from the Best Practices Guide to Optimizing Performance of MySQL Cluster Conclusion MySQL Cluster has achieved a range of impressive benchmark results, and set in context with the previous 7.1 release, is able to deliver over 8x higher performance per node. As a result, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs.

    Read the article

  • Creating a separate Excel file using a macro

    - by shayam
    Hi, I have an Excel file with one column that contains information regarding a tender. Each cell has a value like:
Column: Nokia([Mode1.Number],OLD)
Column: Motorola([Mode1.Number],OLD)
Column: Motorola([Mode2.Number],NEW)
Column: Motorola([Mode3.Number],OLD)
Column: Samsung([Mode2.Number],NEW)
I need to create two Excel files out of this. One should have all the information for OLD and the second should have all the information for NEW. So my output files should contain:
First Excel file: Nokia([Model1.Number]) Motorola([Mode1.Number]) Motorola([Mode3.Number])
Second Excel file: Motorola([Mode2.Number]) Samsung([Mode2.Number])
Kindly help me. Thanks in advance.

    Read the article

  • Which is the best HTML tidy pack? Is there any option in HTML Agility Pack to make an HTML webpage tidy?

    - by Harikrishna
    I am using Html Agility Pack to parse HTML tabular information. Some HTML content has missing end tags, and on such pages, because of the missing end tags, Html Agility Pack does not parse the information properly. So I want to insert end tags where they are missing so that Html Agility Pack parses the information properly. To insert the missing end tags, what should I do? Should I write my own code for that, or use an HTML tidy package? If an HTML tidy package, which is the best one, and how do I use it (an example if possible)? And if my own code, what should it look like? Is there any option in Html Agility Pack that can first make the HTML page tidy and then parse the webpage?
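
    For what it's worth, Html Agility Pack can already repair much of this on its own before you parse, so a separate tidy step may not be needed. A minimal sketch follows; the option properties shown exist on HtmlDocument in the Html Agility Pack builds I have seen, but verify them against the version you use:

    using System;
    using HtmlAgilityPack;

    class TidyBeforeParse
    {
        static void Main()
        {
            var doc = new HtmlDocument();
            doc.OptionFixNestedTags  = true;  // repair improperly nested tags
            doc.OptionAutoCloseOnEnd = true;  // close any tags still open at the end of the document
            doc.LoadHtml("<table><tr><td>cell 1<td>cell 2</table>");   // missing </td> and </tr> end tags

            // Parse the repaired table as usual.
            foreach (var cell in doc.DocumentNode.SelectNodes("//td"))
                Console.WriteLine(cell.InnerText);

            // doc.Save(...) would write the corrected markup back out if you need the tidied HTML itself.
        }
    }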

    Read the article
