Search Results

Search found 17579 results on 704 pages for 'mercurial build'.


  • Calling SSIS from BizTalk Orchestration

    - by aceinthehole
    I have a scenario where I need to move a vast amount of data, and I need to use BizTalk to control the flow and contain the business logic. The problem is that BizTalk will not be able to handle the amount of data that needs to be moved. We have decided to use a BizTalk Orchestration to kick off an SSIS package that does the actual heavy lifting. However, there is a caveat in that we have to be able to pass information into SSIS, such as the file location and info about how to split certain data up. My question is: what is the best way to call into SSIS from an Orchestration given those parameters? Should I build a web service around it? Is there an adapter or stored procedure that I can call? Or is there a way to call it directly from the Orchestration?
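    One commonly suggested pattern for this (a hedged sketch; the job name, parameter table, and helper class below are hypothetical and not from the original question) is to wrap the SSIS package in a SQL Server Agent job, have the orchestration call a small .NET helper that writes the runtime parameters to a table the package reads at startup, and then start the job via msdb.dbo.sp_start_job:

        // Hypothetical helper an orchestration could call (e.g. from an Expression shape).
        using System.Data;
        using System.Data.SqlClient;

        public static class SsisLauncher
        {
            public static void StartLoadJob(string connectionString, string filePath, string splitKey)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();

                    // Hand the runtime parameters (file location, split info) to the package
                    // via a staging table that the SSIS package reads when it starts.
                    using (var cmd = new SqlCommand(
                        "INSERT INTO dbo.SsisRunParameters (FilePath, SplitKey) VALUES (@file, @split)", conn))
                    {
                        cmd.Parameters.AddWithValue("@file", filePath);
                        cmd.Parameters.AddWithValue("@split", splitKey);
                        cmd.ExecuteNonQuery();
                    }

                    // Start the SQL Agent job that wraps the package. sp_start_job returns as soon
                    // as the job is queued, so completion has to be tracked separately.
                    using (var job = new SqlCommand("msdb.dbo.sp_start_job", conn))
                    {
                        job.CommandType = CommandType.StoredProcedure;
                        job.Parameters.AddWithValue("@job_name", "LoadBigDataViaSsis");
                        job.ExecuteNonQuery();
                    }
                }
            }
        }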

    Read the article

  • windows mobile 6.5 Gestures and DirectDraw

    - by ArjanW
    I'm trying to build a UI using DirectDraw in C#. For this I'm using a DirectDrawWrapper as suggested here. My initial tests setting up the screen work perfectly, but now I'd like to incorporate gesture recognition into the UI, so I instantiate a GestureRecognizer and tie it to the _form, which also gets passed to the DirectDrawGraphics constructor: _form = new Form(); _form.Show(); _graphics = new DirectDrawGraphics(_form, CooperativeFlags.Fullscreen, BackbufferMode.Any); gestureRecognizer = new GestureRecognizer(); gestureRecognizer.TargetControl = _form; Pasting the whole DirectDrawWrapper code might be a bit too much, so let me try to formulate a question. I guess DirectDraw talks directly to the video memory, as it should. But then my form won't receive any messages, and thus any event handlers I've tied to the GestureRecognizer won't be fired. How can I still receive messages from the touchscreen?

    Read the article

  • iPad application submission with iPhone SDK beta 5 rejected

    - by FredM
    I am trying to submit a specific iPad application to iTunes Connect before March 27, since Apple says: "Only iPad apps compiled with iPhone SDK 3.2 beta 5 will be accepted for this initial review." So I compiled my application with iPhone SDK 3.2 beta 5 and a distribution provisioning profile. But when I upload my application to iTunes Connect, I get the following error: "The binary you uploaded was invalid. A pre-release beta version of the SDK was used to build the application." It definitely is beta 5! Do you have any idea? Thank you in advance. Fred

    Read the article

  • Zend Framework: How to include an OR statement in an SQL fetchAll()

    - by Scoobler
    I am trying to build the following SQL statement: SELECT users_table.*, users_data.first_name, users_data.last_name FROM users_table INNER JOIN users_data ON users_table.id = user_id WHERE (users_table.username LIKE '%sc%') OR (users_data.first_name LIKE '%sc%') OR (users_data.last_name LIKE '%sc%') I have the following code at the moment: public function findAllUsersLike($like) { $select = $this->select(Zend_Db_Table::SELECT_WITH_FROM_PART)->setIntegrityCheck(false); $select->where('users_table.username LIKE ?', '%'.$like.'%'); $select->where('users_data.first_name LIKE ?', '%'.$like.'%'); $select->where('users_data.last_name LIKE ?', '%'.$like.'%'); $select->join('users_data', 'users_table.id = user_id', array('first_name', 'last_name')); return $this->fetchAll($select); } This is close, but not right, as it uses AND to combine the extra WHERE conditions instead of OR. Is there any way to do this as one select? Or should I perform 3 selects and combine the results (a lot more overhead?)? P.S. The parameter $like that is passed in is sanitized, so you don't need to worry about user input in the code above!

    Read the article

  • Trying to compile VS2008 project on Win 64 bit which is custom Powershell PSSnapin

    - by Boris Kleynbok
    My library project compiles fine for Any CPU in VS2008 running on Win 7 64-bit. Now, in the post-build step, the following command fails when attempting to register the library DLL: PS C:\Windows\Microsoft.NET\Framework64\v2.0.50727> .\installutil C:\path\Project.dll Exception occurred while initializing the installation: System.BadImageFormatException: Could not load file or assembly 'file:///C:\path\Project.dll' or one of its dependencies. An attempt was made to load a program with an incorrect format. Do I need to compile the project as x64? I was under the impression that Any CPU would take care of it. Also, my library does have dependencies. Do they also need to be compiled as x64? Any help is appreciated.
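    For comparison (a hedged sketch using the standard .NET 2.0 install paths, not anything from the original post): the installutil under Framework64 runs as a 64-bit process and will throw BadImageFormatException if the assembly or any of its dependencies is x86-only, while the copy under Framework is the 32-bit equivalent.

        # 64-bit InstallUtil - loads only AnyCPU/x64 assemblies and their dependencies
        C:\Windows\Microsoft.NET\Framework64\v2.0.50727\installutil.exe C:\path\Project.dll

        # 32-bit InstallUtil - use this if the snap-in or one of its dependencies is built as x86
        C:\Windows\Microsoft.NET\Framework\v2.0.50727\installutil.exe C:\path\Project.dll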

    Read the article

  • Is it safe to override `release` for debugging?

    - by Koning Baard XIV
    Sometimes I need to find out whether an object will really be released. I could use Instruments of course, but that takes a lot of time, and I would have to search through millions of objects, so I used to do this: -(void)release { NSLog(@"I'm released"); [super release]; } But the problem is: is this safe to do? Can I run into any problems when I override -(void)release? Also, is it really void? And what if I build my application for distribution but accidentally leave it in? Or is it just safe? Thanks

    Read the article

  • Testing Entity Framework applications, pt. 3: NDbUnit

    - by Thomas Weller
    This is the third of a three part series that deals with the issue of faking test data in the context of a legacy app that was built with Microsoft's Entity Framework (EF) on top of an MS SQL Server database – a scenario that can be found very often. Please read the first part for a description of the sample application, a discussion of some general aspects of unit testing in a database context, and of some more specific aspects of the here discussed EF/MSSQL combination. Lately, I wondered how you would ‘mock’ the data layer of a legacy application, when this data layer is made up of an MS Entity Framework (EF) model in combination with a MS SQL Server database. Originally, this question came up in the context of how you could enable higher-level integration tests (automated UI tests, to be exact) for a legacy application that uses this EF/MSSQL combo as its data store mechanism – a not so uncommon scenario. The question sparked my interest, and I decided to dive into it somewhat deeper. What I've found out is, in short, that it's not very easy and straightforward to do it – but it can be done. The two strategies that are best suited to fit the bill involve using either the (commercial) Typemock Isolator tool or the (free) NDbUnit framework. The use of Typemock was discussed in the previous post, this post now will present the NDbUnit approach... NDbUnit is an Apache 2.0-licensed open-source project, and like so many other Nxxx tools and frameworks, it is basically a C#/.NET port of the corresponding Java version (DbUnit namely). In short, it helps you in flexibly managing the state of a database in that it lets you easily perform basic operations (like e.g. Insert, Delete, Refresh, DeleteAll)  against your database and, most notably, lets you feed it with data from external xml files. Let's have a look at how things can be done with the help of this framework. Preparing the test data Compared to Typemock, using NDbUnit implies a totally different approach to meet our testing needs.  So the here described testing scenario requires an instance of an SQL Server database in operation, and it also means that the Entity Framework model that sits on top of this database is completely unaffected. First things first: For its interactions with the database, NDbUnit relies on a .NET Dataset xsd file. See Step 1 of their Quick Start Guide for a description of how to create one. 
With this prerequisite in place then, the test fixture's setup code could look something like this: [TestFixture, TestsOn(typeof(PersonRepository))] [Metadata("NDbUnit Quickstart URL",           "http://code.google.com/p/ndbunit/wiki/QuickStartGuide")] [Description("Uses the NDbUnit library to provide test data to a local database.")] public class PersonRepositoryFixture {     #region Constants     private const string XmlSchema = @"..\..\TestData\School.xsd";     #endregion // Constants     #region Fields     private SchoolEntities _schoolContext;     private PersonRepository _personRepository;     private INDbUnitTest _database;     #endregion // Fields     #region Setup/TearDown     [FixtureSetUp]     public void FixtureSetUp()     {         var connectionString = ConfigurationManager.ConnectionStrings["School_Test"].ConnectionString;         _database = new SqlDbUnitTest(connectionString);         _database.ReadXmlSchema(XmlSchema);         var entityConnectionStringBuilder = new EntityConnectionStringBuilder         {             Metadata = "res://*/School.csdl|res://*/School.ssdl|res://*/School.msl",             Provider = "System.Data.SqlClient",             ProviderConnectionString = connectionString         };         _schoolContext = new SchoolEntities(entityConnectionStringBuilder.ConnectionString);         _personRepository = new PersonRepository(this._schoolContext);     }     [FixtureTearDown]     public void FixtureTearDown()     {         _database.PerformDbOperation(DbOperationFlag.DeleteAll);         _schoolContext.Dispose();     }     ...  As you can see, there is slightly more fixture setup code involved if your tests are using NDbUnit to provide the test data: Because we're dealing with a physical database instance here, we first need to pick up the test-specific connection string from the test assemblies' App.config, then initialize an NDbUnit helper object with this connection along with the provided xsd file, and also set up the SchoolEntities and the PersonRepository instances accordingly. The _database field (an instance of the INdUnitTest interface) will be our single access point to the underlying database: We use it to perform all the required operations against the data store. To have a flexible mechanism to easily insert data into the database, we can write a helper method like this: private void InsertTestData(params string[] dataFileNames) {     _database.PerformDbOperation(DbOperationFlag.DeleteAll);     if (dataFileNames == null)     {         return;     }     try     {         foreach (string fileName in dataFileNames)         {             if (!File.Exists(fileName))             {                 throw new FileNotFoundException(Path.GetFullPath(fileName));             }             _database.ReadXml(fileName);             _database.PerformDbOperation(DbOperationFlag.InsertIdentity);         }     }     catch     {         _database.PerformDbOperation(DbOperationFlag.DeleteAll);         throw;     } } This lets us easily insert test data from xml files, in any number and in a  controlled order (which is important because we eventually must fulfill referential constraints, or we must account for some other stuff that imposes a specific ordering on data insertion). Again, as with Typemock, I won't go into API details here. - Unfortunately, there isn't too much documentation for NDbUnit anyway, other than the already mentioned Quick Start Guide (and the source code itself, of course) - a not so uncommon problem with smaller Open Source Projects. 
Last not least, we need to provide the required test data in xml form. A snippet for data from the People table might look like this, for example: <?xml version="1.0" encoding="utf-8" ?> <School xmlns="http://tempuri.org/School.xsd">   <Person>     <PersonID>1</PersonID>     <LastName>Abercrombie</LastName>     <FirstName>Kim</FirstName>     <HireDate>1995-03-11T00:00:00</HireDate>   </Person>   <Person>     <PersonID>2</PersonID>     <LastName>Barzdukas</LastName>     <FirstName>Gytis</FirstName>     <EnrollmentDate>2005-09-01T00:00:00</EnrollmentDate>   </Person>   <Person>     ... You can also have data from various tables in one single xml file, if that's appropriate for you (but beware of the already mentioned ordering issues). It's true that your test assembly may end up with dozens of such xml files, each containing quite a big amount of text data. But because the files are of very low complexity, and with the help of a little bit of Copy/Paste and Excel magic, this appears to be well manageable. Executing some basic tests Here are some of the possible tests that can be written with the above preparations in place: private const string People = @"..\..\TestData\School.People.xml"; ... [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")] public void GetNameList_ListOrdering_ReturnsTheExpectedFullNames() {     InsertTestData(People);     List<string> names =         _personRepository.GetNameList(NameOrdering.List);     Assert.Count(34, names);     Assert.AreEqual("Abercrombie, Kim", names.First());     Assert.AreEqual("Zheng, Roger", names.Last()); } [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")] [DependsOn("RemovePerson_CalledOnce_DecreasesCountByOne")] public void GetNameList_NormalOrdering_ReturnsTheExpectedFullNames() {     InsertTestData(People);     List<string> names =         _personRepository.GetNameList(NameOrdering.Normal);     Assert.Count(34, names);     Assert.AreEqual("Alexandra Walker", names.First());     Assert.AreEqual("Yan Li", names.Last()); } [Test, TestsOn("PersonRepository.AddPerson")] public void AddPerson_CalledOnce_IncreasesCountByOne() {     InsertTestData(People);     int count = _personRepository.Count;     _personRepository.AddPerson(new Person { FirstName = "Thomas", LastName = "Weller" });     Assert.AreEqual(count + 1, _personRepository.Count); } [Test, TestsOn("PersonRepository.RemovePerson")] public void RemovePerson_CalledOnce_DecreasesCountByOne() {     InsertTestData(People);     int count = _personRepository.Count;     _personRepository.RemovePerson(new Person { PersonID = 33 });     Assert.AreEqual(count - 1, _personRepository.Count); } Not much difference here compared to the corresponding Typemock versions, except that we had to do a bit more preparational work (and also it was harder to get the required knowledge). But this picture changes quite dramatically if we look at some more demanding test cases: Ok, and what if things are becoming somewhat more complex? Tests like the above ones represent the 'easy' scenarios. They may account for the biggest portion of real-world use cases of the application, and they are important to make sure that it is generally sound. But usually, all these nasty little bugs originate from the more complex parts of our code, or they occur when something goes wrong. So, for a testing strategy to be of real practical use, it is especially important to see how easy or difficult it is to mimick a scenario which represents a more complex or exceptional case. 
The following test, for example, deals with the case that there is some sort of invalid input from the caller: [Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")] [Row(null, typeof(ArgumentNullException))] [Row("", typeof(ArgumentException))] [Row("NotExistingCourse", typeof(ArgumentException))] public void GetCourseMembers_WithGivenVariousInvalidValues_Throws(string courseTitle, Type expectedInnerExceptionType) {     var exception = Assert.Throws<RepositoryException>(() =>                                 _personRepository.GetCourseMembers(courseTitle));     Assert.IsInstanceOfType(expectedInnerExceptionType, exception.InnerException); } Apparently, this test doesn't need an 'Arrange' part at all (see here for the same test with the Typemock tool). It acts just like any other client code, and all the required business logic comes from the database itself. This doesn't always necessarily mean that there is less complexity, but only that the complexity happens in a different part of your test resources (in the xml files namely, where you sometimes have to spend a lot of effort for carefully preparing the required test data). Another example, which relies on an underlying 1-n relationship, might be this: [Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")] public void GetCourseMembers_WhenGivenAnExistingCourse_ReturnsListOfStudents() {     InsertTestData(People, Course, Department, StudentGrade);     List<Person> persons = _personRepository.GetCourseMembers("Macroeconomics");     Assert.Count(4, persons);     Assert.ForAll(         persons,         @p => new[] { 10, 11, 12, 14 }.Contains(@p.PersonID),         "Person has none of the expected IDs."); } If you compare this test to its corresponding Typemock version, you immediately see that the test itself is much simpler, easier to read, and thus much more intention-revealing. The complexity here lies hidden behind the call to the InsertTestData() helper method and the content of the used xml files with the test data. And also note that you might have to provide additional data which are not even directly relevant to your test, but are required only to fulfill some integrity needs of the underlying database. Conclusion The first thing to notice when comparing the NDbUnit approach to its Typemock counterpart obviously deals with performance: Of course, NDbUnit is much slower than Typemock. Technically,  it doesn't even make sense to compare the two tools. But practically, it may well play a role and could or could not be an issue, depending on how much tests you have of this kind, how often you run them, and what role they play in your development cycle. Also, because the dataset from the required xsd file must fully match the database schema (even in parts that otherwise wouldn't be relevant to you), it can be quite cumbersome to be in a team where different people are working with the database in parallel. 
My personal experience is – as already said in the first part – that Typemock gives you a better development experience in a 'dynamic' scenario (when you're working in some kind of TDD-style, you're oftentimes executing the tests from your dev box, and your database schema changes frequently), whereas the NDbUnit approach is a good and solid solution in more 'static' development scenarios (when you need to execute the tests less frequently or only on a separate build server, and/or the underlying database schema can be kept relatively stable), for example some variations of higher-level integration or User-Acceptance tests. But in any case, opening Entity Framework based applications for testing requires a fair amount of resources, planning, and preparational work – it's definitely not the kind of stuff that you would call 'easy to test'. Hopefully, future versions of EF will take testing concerns into account. Otherwise, I don't see too much of a future for the framework in the long run, even though it's quite popular at the moment... The sample solution A sample solution (VS 2010) with the code from this article series is available via my Bitbucket account from here (Bitbucket is a hosting site for Mercurial repositories. The repositories may also be accessed with the Git and Subversion SCMs - consult the documentation for details. In addition, it is possible to download the solution simply as a zipped archive – via the 'get source' button on the very right.). The solution contains some more tests against the PersonRepository class, which are not shown here. Also, it contains database scripts to create and fill the School sample database. To compile and run, the solution expects the Gallio/MbUnit framework to be installed (which is free and can be downloaded from here), the NDbUnit framework (which is also free and can be downloaded from here), and the Typemock Isolator tool (a fully functional 30day-trial is available here). Moreover, you will need an instance of the Microsoft SQL Server DBMS, and you will have to adapt the connection strings in the test projects App.config files accordingly.

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform. Archive Format Details Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris: Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n". The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in/usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length. The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format. The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course — nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld). 
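    To make the layout described above concrete, here is a hedged C# sketch of walking the member headers of an SVR4 archive. The field offsets and widths follow the classic ar_hdr definition from ar.h (16-byte name, 12-byte date, 6-byte uid and gid, 8-byte mode, 10-byte size, 2-byte terminator); that layout is an assumption taken from the header file rather than from this post, error handling is omitted, and special members are not treated specially here.

        // Minimal sketch of enumerating archive members, based on the format described above.
        using System;
        using System.IO;
        using System.Text;

        static class ArWalker
        {
            public static void ListMembers(string path)
            {
                using (var f = File.OpenRead(path))
                {
                    var magic = new byte[8];
                    f.Read(magic, 0, 8);
                    if (Encoding.ASCII.GetString(magic) != "!<arch>\n")
                        throw new InvalidDataException("Not an archive");

                    var hdr = new byte[60];              // sizeof(ar_hdr)
                    while (f.Read(hdr, 0, 60) == 60)
                    {
                        string name = Encoding.ASCII.GetString(hdr, 0, 16).TrimEnd();
                        // All numeric header fields are ASCII text, not binary integers.
                        long size = long.Parse(Encoding.ASCII.GetString(hdr, 48, 10).Trim());

                        Console.WriteLine($"{name}  {size} bytes");

                        // Member data is padded with a newline to an even length.
                        f.Seek(size + (size & 1), SeekOrigin.Current);
                    }
                }
            }
        }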
Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: A symbol table that maps ELF symbols to the object archive member that provides it, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB. When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the object to the archive member that provides it. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little endian host such as x86. The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise. The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build an new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow. Factors That Limit Solaris Archive Size As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs. 
As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB. Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers. The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB size limit on the size of any individual member in an archive. In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB. Archive Format Differences Among Unix Variants While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable defacto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new. The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem. I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes, SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX). AIX AIX is an exception to rule that Unix archive formats are all based on the original Bell labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems: These formats use a different magic number than the standard one used by Solaris and other Unix variants. 
They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted. They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though. Their member headers are doubly linked, containing offsets to both the previous and next members. Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and are going to be slow to use with large archives. BSD BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table. GNU/Linux The GNU toolchain uses the SVR4 format, and is compatible with Solaris. HP-UX HP-UX seems to follow the SVR4 model, and is compatible with Solaris. IRIX IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else. Tru64 Unix (Digital/Compaq/HP) Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them. Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done. The Implementation of 64-bit Solaris Archives As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default. 
This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that. Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf. As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.
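    A hedged sketch tying together the overflow conventions discussed in this post. The name overflow ("/offset" into the string table) exists today; the size overflow form is only the proposal described above, and the "/" plus newline terminator for string-table entries is an assumption carried over from the long-name handling.

        // Resolving header fields that overflow into the archive string table,
        // plus reading the big-endian 32-bit symbol table offsets.
        using System;

        static class ArOverflow
        {
            static string Lookup(string stringTable, int offset)
            {
                int end = stringTable.IndexOfAny(new[] { '/', '\n' }, offset);
                return stringTable.Substring(offset, end - offset);
            }

            // "/123" in the 16-character name field means the real name starts at
            // offset 123 in the string table member.
            public static string ResolveName(string nameField, string stringTable)
            {
                nameField = nameField.TrimEnd();
                return nameField.Length > 1 && nameField[0] == '/' && char.IsDigit(nameField[1])
                    ? Lookup(stringTable, int.Parse(nameField.Substring(1)))
                    : nameField.TrimEnd('/');
            }

            // Proposed future form: a plain ASCII decimal size today, or "/offset"
            // pointing at a longer decimal string stored in the string table.
            public static long ResolveSize(string sizeField, string stringTable)
            {
                sizeField = sizeField.Trim();
                return sizeField.StartsWith("/")
                    ? long.Parse(Lookup(stringTable, int.Parse(sizeField.Substring(1))))
                    : long.Parse(sizeField);
            }

            // Symbol table offsets are 32-bit binary integers stored big-endian,
            // even on little-endian hosts such as x86.
            public static uint ReadSymtabOffset(byte[] symtab, int pos)
            {
                return (uint)((symtab[pos] << 24) | (symtab[pos + 1] << 16) |
                              (symtab[pos + 2] << 8) | symtab[pos + 3]);
            }
        }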

    Read the article

  • Best Open Source Java CMS

    - by LuRsT
    I'm trying to find a good Java CMS. I've stumbled upon some that are quite good, like: Apache Lenya, dotCMS, Info Glue, Open Edit, MMBase, Contelligent, Hippo CMS. Which one do you recommend, or is there one that I'm missing? I have some more that I am studying at the moment. The requirements are that I can build modules for it with ease, that it is open source and free, and that it has LDAP support. The problem is that I'm not that into Java on the web, which is why I'm having trouble finding a good one. A Java CMS like DotNetNuke would be best. Edit: Jahia is off the list because it has no support for LDAP (community version). Thanks!

    Read the article

  • DevConnections Session Slides, Samples and Links

    - by Rick Strahl
    Finally coming up for air this week, after catching up with being on the road for the better part of three weeks. Here are my slides, samples and links for my four DevConnections Session two weeks ago in Vegas. I ended up doing one extra un-prepared for session on WebAPI and AJAX, as some of the speakers were either delayed or unable to make it at all to Vegas due to Sandy's mayhem. It was pretty hectic in the speaker room as Erik (our event coordinator extrodinaire) was scrambling to fill session slots with speakers :-). Surprisingly it didn't feel like the storm affected attendance drastically though, but I guess it's hard to tell without actual numbers. The conference was a lot of fun - it's been a while since I've been speaking at one of these larger conferences. I'd been taking a hiatus, and I forgot how much I enjoy actually giving talks. Preparing - well not  quite so much, especially since I ended up essentially preparing or completely rewriting for all three of these talks and I was stressing out a bit as I was sick the week before the conference and didn't get as much time to prepare as I wanted to. But - as always seems to be the case - it all worked out, but I guess those that attended have to be the judge of that… It was great to catch up with my speaker friends as well - man I feel out of touch. I got to spend a bunch of time with Dan Wahlin, Ward Bell, Julie Lerman and for about 10 minutes even got to catch up with the ever so busy Michele Bustamante. Lots of great technical discussions including a fun and heated REST controversy with Ward and Howard Dierking. There were also a number of great discussions with attendees, describing how they're using the technologies touched in my talks in live applications. I got some great ideas from some of these and I wish there would have been more opportunities for these kinds of discussions. One thing I miss at these Vegas events though is some sort of coherent event where attendees and speakers get to mingle. These Vegas conferences are just like "go to sessions, then go out and PARTY on the town" - it's Vegas after all! But I think that it's always nice to have at least one evening event where everybody gets to hang out together and trade stories and geek talk. Overall there didn't seem to be much opportunity for that beyond lunch or the small and short exhibit hall events which it seemed not many people actually went to. Anyways, a good time was had. I hope those of you that came to my sessions learned something useful. There were lots of great questions and discussions after the sessions - always appreciate hearing the real life scenarios that people deal with in relation to the abstracted scenarios in sessions. Here are the Session abstracts, a few comments and the links for downloading slides and  samples. It's not quite like being there, but I hope this stuff turns out to be useful to some of you. I'll be following up a couple of these sessions with white papers in the following weeks. Enjoy. ASP.NET Architecture: How ASP.NET Works at the Low Level Abstract:Interested in how ASP.NET works at a low level? ASP.NET is extremely powerful and flexible technology, but it's easy to forget about the core framework that underlies the higher level technologies like ASP.NET MVC, WebForms, WebPages, Web Services that we deal with on a day to day basis. 
The ASP.NET core drives all the higher level handlers and frameworks layered on top of it and with the core power comes some complexity in the form of a very rich object model that controls the flow of a request through the ASP.NET pipeline from Windows HTTP services down to the application level. To take full advantage of it, it helps to understand the underlying architecture and model. This session discusses the architecture of ASP.NET along with a number of useful tidbits that you can use for building and debugging your ASP.NET applications more efficiently. We look at overall architecture, how requests flow from the IIS (7 and later) Web Server to the ASP.NET runtime into HTTP handlers, modules and filters and finally into high-level handlers like MVC, Web Forms or Web API. Focus of this session is on the low-level aspects on the ASP.NET runtime, with examples that demonstrate the bootstrapping of ASP.NET, threading models, how Application Domains are used, startup bootstrapping, how configuration files are applied and how all of this relates to the applications you write either using low-level tools like HTTP handlers and modules or high-level pages or services sitting at the top of the ASP.NET runtime processing chain. Comments:I was surprised to see so many people show up for this session - especially since it was the last session on the last day and a short 1 hour session to boot. The room was packed and it was to see so many people interested the abstracts of architecture of ASP.NET beyond the immediate high level application needs. Lots of great questions in this talk as well - I only wish this session would have been the full hour 15 minutes as we just a little short of getting through the main material (didn't make it to Filters and Error handling). I haven't done this session in a long time and I had to pretty much re-figure all the system internals having to do with the ASP.NET bootstrapping in light for the changes that came with IIS 7 and later. The last time I did this talk was with IIS6, I guess it's been a while. I love doing this session, mainly because in my mind the core of ASP.NET overall is so cleanly designed to provide maximum flexibility without compromising performance that has clearly stood the test of time in the 10 years or so that .NET has been around. While there are a lot of moving parts, the technology is easy to manage once you understand the core components and the core model hasn't changed much even while the underlying architecture that drives has been almost completely revamped especially with the introduction of IIS 7 and later. Download Samples and Slides   Introduction to using jQuery with ASP.NET Abstract:In this session you'll learn how to take advantage of jQuery in your ASP.NET applications. Starting with an overview of jQuery client features via many short and fun examples, you'll find out about core features like the power of selectors for document element selection, manipulating these elements with jQuery's wrapped set methods in a browser independent way, how to hook up and handle events easily and generally apply concepts of unobtrusive JavaScript principles to client scripting. The second half of the session then delves into jQuery's AJAX features and several different ways how you can interact with ASP.NET on the server. You'll see examples of using ASP.NET MVC for serving HTML and JSON AJAX content, as well as using the new ASP.NET Web API to serve JSON and hypermedia content. 
You'll also see examples of client side templating/databinding with Handlebars and Knockout. Comments:This session was in a monster of a room and to my surprise it was nearly packed, given that this was a 100 level session. I can see that it's a good idea to continue to do intro sessions to jQuery as there appeared to be quite a number of folks who had not worked much with jQuery yet and who most likely could greatly benefit from using it. Seemed seemed to me the session got more than a few people excited to going if they hadn't yet :-).  Anyway I just love doing this session because it's mostly live coding and highly interactive - not many sessions that I can build things up from scratch and iterate on in an hour. jQuery makes that easy though. Resources: Slides and Code Samples Introduction to jQuery White Paper Introduction to ASP.NET Web API   Hosting the Razor Scripting Engine in Your Own Applications Abstract:The Razor Engine used in ASP.NET MVC and ASP.NET Web Pages is a free-standing scripting engine that can be disassociated from these Web-specific implementations and can be used in your own applications. Razor allows for a powerful mix of code and text rendering that makes it a wonderful tool for any sort of text generation, from creating HTML output in non-Web applications, to rendering mail merge-like functionality, to code generation for developer tools and even as a plug-in scripting engine. In this session, we'll look at the components that make up the Razor engine and how you can bootstrap it in your own applications to hook up templating. You'll find out how to create custom templates and manage Razor requests that can be pre-compiled, detecting page changes and act in ways similar to a full runtime. We look at ways that you can pass data into the engine and retrieve both the rendered output as well as result values in a package that makes it easy to plug Razor into your own applications. Comments:That this session was picked was a bit of a surprise to me, since it's a bit of a niche topic. Even more of a surprise was that during the session quite a few people who attended had actually used Razor externally and were there to find out more about how the process works and how to extend it. In the session I talk a bit about a custom Razor hosting implementation (Westwind.RazorHosting) and drilled into the various components required to build a custom Razor Hosting engine and a runtime around it. This sessions was a bit of a chore to prepare for as there are lots of technical implementation details that needed to be dealt with and squeezing that into an hour 15 is a bit tight (and that aren't addressed even by some of the wrapper libraries that exist). Found out though that there's quite a bit of interest in using a templating engine outside of web applications, or often side by side with the HTML output generated by frameworks like MVC or WebForms. An extra fun part of this session was that this was my first session and when I went to set up I realized I forgot my mini-DVI to VGA adapter cable to plug into the projector in my room - 6 minutes before the session was about to start. So I ended up sprinting the half a mile + back to my room - and back at a full sprint. I managed to be back only a couple of minutes late, but when I started I was out of breath for the first 10 minutes or so, while trying to talk. 
Musta sounded a bit funny as I was trying not to gasp too much :-) Resources: Slides and Code Samples Westwind.RazorHosting GitHub Project Original RazorHosting Blog Post   Introduction to ASP.NET Web API for AJAX Applications Abstract: WebAPI provides a new framework for creating REST based APIs, but it can also act as a backend to typical AJAX operations. This session covers the core features of Web API as it relates to typical AJAX application development. We'll cover content negotiation, routing and a variety of output generation options, as well as managing data updates from the client in the context of a small Single Page Application style Web app. Finally we'll look at some of the extensibility features in Web API to customize and extend it in a number of useful ways. Comments: This session was a fill-in for session slots left open by MIA speakers stranded by Sandy. I had samples from my previous Web API article, so I decided to go ahead and put together a session from it. Given that I spent only a couple of hours preparing and putting slides together, I was glad it turned out as it did - it kind of just ran itself by way of the examples, I guess, along with nice audience interactions and questions. Lots of interest - and also some confusion about when Web API makes sense. Both this session and the jQuery session ended up getting a ton of questions about when to use Web API vs. MVC, and whether it would make sense to switch to Web API for all AJAX backend work. In my opinion there's no need to jump to Web API for existing applications that already have a good AJAX foundation. Web API is awesome for real externally consumed APIs and clearly defined application AJAX APIs. For typical application-level AJAX calls it's still a good idea, but ASP.NET MVC can serve most if not all of that functionality just as well. There's no need to abandon MVC (or even ASP.NET AJAX or third-party AJAX backends) just to move to Web API. For new projects Web API probably makes good sense for isolating AJAX calls, but it really depends on how the application is set up. In some cases sharing business logic between the HTML and AJAX interfaces with a single MVC API can be cleaner than creating two completely separate code paths to serve essentially the same business logic. Resources: Slides and Code Samples Sample Code on GitHub Introduction to ASP.NET Web API White Paper. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Conferences, ASP.NET
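    To make the Web API vs. MVC comparison above concrete, here is a minimal hedged sketch (class names and data are illustrative, not taken from the session samples) of the same AJAX endpoint expressed both ways:

        // ASP.NET Web API: content negotiation returns JSON or XML based on the Accept header.
        using System.Collections.Generic;
        using System.Web.Http;
        using System.Web.Mvc;

        public class AlbumsApiController : ApiController
        {
            public IEnumerable<string> Get()
            {
                return new[] { "Dirt", "Nevermind" };
            }
        }

        // ASP.NET MVC: the same data served as an explicit JSON action result.
        public class AlbumController : Controller
        {
            public ActionResult List()
            {
                return Json(new[] { "Dirt", "Nevermind" }, JsonRequestBehavior.AllowGet);
            }
        }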

    Read the article

  • m2eclipse resource filtering

    - by drewzilla
    I'm having problems with resource filtering using m2eclipse Maven support in Eclipse. It seems that filtering only takes place on resources that have changed. This is fundamentally flawed because, if I have a file that references a property (e.g. ${my.property}) and the value of that property changes, the filtering will only be performed if the referencing file is also modified - if I only change the property value (in my pom.xml), the filtering is not applied to the files that reference it. So, if I make a change to a property in my pom file, the filtering is not applied. However, if I then go to the file that references that property (e.g. a Spring config file) and edit and save it, the filtering is applied. I did read somewhere that "m2eclipse skips filtering if there were no resource changes during an incremental build". I'm using m2eclipse 0.10.x. Has anyone else come across this? Thanks, Andrew
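    For context, a minimal sketch of the kind of filtering setup being described (the property value and resource directory are illustrative assumptions, not taken from the question):

        <!-- pom.xml: enable filtering so ${my.property} placeholders in resources are replaced -->
        <properties>
            <my.property>some-value</my.property>
        </properties>
        <build>
            <resources>
                <resource>
                    <directory>src/main/resources</directory>
                    <!-- placeholders in files under this directory are substituted at build time -->
                    <filtering>true</filtering>
                </resource>
            </resources>
        </build>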

    Read the article

  • Extend RedCloth via Redmine plugin?

    - by FLX
    Hello, I'm new to Ruby/Redmine/RedCloth, but I'm trying to achieve the following: the default way to build a link in Textile is "foo":http://bar. However, 90% of the day I use Atlassian products, which use [foo|http://bar] as link markup. To keep everything a bit uniform I'd like to implement this in Redmine via a plugin. However, it appears that you can't change the macro syntax, so instead I'll have to look into extending RedCloth to accept this form of inserting links. Does anyone know how I can achieve this? Thank you and Merry Christmas, Dennis

    Read the article

  • CodePlex Daily Summary for Monday, May 21, 2012

    CodePlex Daily Summary for Monday, May 21, 2012

    Popular Releases

    - Metadata Document Generator for Microsoft Dynamics CRM 2011: Metadata Document Generator (2.0.0.0): New Metro-style UI. New features: save and load settings to/from file; export only OptionSet attributes; use of Gembox Spreadsheet to generate Excel (makes the application lighter: 1.5MB instead of 7MB).
    - Audio Pitch & Shift: Audio Pitch And Shift 4.2.0: Backward / Forward buttons; improved features for encoding, streaming and the menu; bug fixes.
    - State Machine .netmf: State Machine Example: First release. Contains 3 state machines running on separate threads, an event-driven button to change the states, a StateMachineEngine to support the machines, and a Message class with the type of data to send between states.
    - Silverlight socket component: Smark.NetDisk: Smark.NetDisk?????Silverlight ?.net???????????,???????????????????????。Smark.NetDisk??????????,????.net???????????????????????tcp??;???????Silverlight??????????????????????
    - ZXMAK2: Version 2.6.1.9: Added a WAV serializer for the tape device.
    - callisto: callisto 2.0.28: Update log: extended the Scribble protocol; updated HTML5 client code - now supports the latest versions of Google Chrome.
    - ExtAspNet: ExtAspNet v3.1.6: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+. ????: Apache License 2.0 (Apache). ??: http://bbs.extasp.net/ ??: http://demo.extasp.net/ ??: http://doc.extasp.net/ ??: http://extaspnet.codeplex.com/ ??: http://sanshi.cnblogs.com/ ????: +2012-05-20 v3.1.6 -??RowD...
    - Dynamics XRM Tools: Dynamics XRM Tools BETA 1.0: The Dynamics XRM Tools 1.0 BETA is now available. Separate downloads are available for On Premise and Online, as certain features are only available On Premise. This is a BETA build and may not resemble the final release. Many enhancements are in development and will be made available soon. Please provide feedback so that we may learn and discover how to make these tools better.
    - WatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.14.05: What's new: added new editor skin "BootstrapCK-Skin"; added new editor skin "Slick"; added a Dnn Pages drop-down to the Link Dialog (to quickly link to a portal tab). Changes: fixed issue #6956, a localization issue with some languages; fixed issue #6930, the folder tree view was not working in some cases; changed the user folder from user name to user id; the user folder is now used by the upload function when User Folder is enabled in the file browser; fixed the resizer preview image; optimized the oEmbed Pl...
    - PHPExcel: PHPExcel 1.7.7: See the change log for details of the new features and bugfixes included in this release. BREAKING CHANGE! From PHPExcel 1.7.8 onwards, the 3rd-party tcPDF library will no longer be bundled with PHPExcel for rendering PDF files through the PDF Writer. The PDF Writer is being rewritten to allow a choice of 3rd-party PDF libraries (tcPDF, mPDF and domPDF initially), none of which will be bundled with PHPExcel, but which can be downloaded separately from the appropriate sites.
    - GhostBuster: GhostBuster Setup (91520): Added WMI-based RestorePoint support; removed test code from program.cs; improved counting; changed the color of ghosted but unfiltered devices; changed HwEntries into an ObservableCollection; added a Properties form; added a Properties menu item to the context menu; added Hide Unfiltered Devices to the context menu. If you like this tool, leave me a note, rate this project, write a review, or donate to GhostBuster.
    - C#??????EXCEL??、??、????????: DataPie (??MSSQL 2008、ORACLE、ACCESS 2007): DataPie_V3.2: V3.2, 2012?5?19? ????ORACLE??????。
    - AvalonDock: AvalonDock 2.0.0795: Welcome to the Beta release of AvalonDock 2.0. After 4 months of hard work I'm ready to upload the beta version of AvalonDock 2.0. This new version boasts a lot of new features and is now stable enough to be deployed in production scenarios. For this reason I encourage everyone using AD 1.3 or earlier to upgrade soon to this new version. The final version is scheduled for the end of June. What is included in Beta: 1) Stability! Thanks to all users' contributions I've corrected a lot of issues...
    - myCollections: Version 2.1.0.0: New in this version: improved UI; new Metro skin; improved performance; added proxy settings; new Music and Books artist detail; lots of bug fixing.
    - AspxCommerce: AspxCommerce1.1: AspxCommerce - 'Flexible and easy eCommerce platform' - offers a complete e-commerce solution that allows you to build and run your fully functional online store in minutes. You can create your storefront, manage the products through categories and subcategories, accept payments through credit cards and ship the ordered products to the customers. We have everything set up for you, so that you can focus only on building your own online store. Note: To log in as a superuser, the username and pass...
    - SiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.1.1616.403): Bug fix: hide the save button when a Titles or Descriptions element is selected.
    - DotSpatial: DotSpatial 1.2: This is a minor release. See the changes in the issue tracker. Minimal -- includes the DotSpatial core and essential extensions. Extended -- includes debugging symbols and additional extensions. Tutorials are available. Just want to run the software? An end-user (non-programmer) version is available, branded as MapWindow. Want to add your own feature? Develop a plugin using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components ...
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.52: Make preprocessor comment-statements nestable; add the ///#IFNDEF statement (Discussion #355785). Don't throw an error for old-school JScript event handlers, and don't rename them if they aren't global functions.
    - DotNetNuke® Events: 06.00.00: This is a serious release of Events. DNN 6 form pattern - we have taken the full route towards DNN 6, most notably the incorporation of the DNN 6 form pattern with streamlined UX/UI. We have also tried to change all formatting to a div-based structure. A daunting task, since the Events module contains a lot of forms. Roger has done a splendid job by going through all the forms in great detail, replacing all table-style layouts with the new DNN 6 div class="dnnForm XXX" type of layout with chang...
    - LogicCircuit: LogicCircuit 2.12.5.15: Logic Circuit is educational software for designing and simulating logic circuits. An intuitive graphical user interface allows you to create an unrestricted circuit hierarchy with multi-bit buses, debug circuit behavior with an oscilloscope, and navigate the running circuit hierarchy. Changes of this version: this release fixes one nasty bug. The two functions XOR and XNOR, when used with 3 or more inputs, were incorrectly evaluating their results. If you have a circuit that is using these functions...

    New Projects

    - Advanced CRM 2011 Auto Number: Advanced CRM 2011 AutoNumber is the most advanced auto-numbering solution for CRM 2011 Online, On-Premise and Partner Hosted (IFD). You can add virtually any auto number to both system entities and custom entities. Here are all the features: * Supports CRM 2011 Online, On-Premise and Partner Hosted (IFD) for both Sandbox and Non-Sandbox mode; also supports load-balanced CRM installations. * Guaranteed 100% unique autogenerated number. * Supports both system entities and custom entit...
    - alberguedeblm: We, the members of the Belém Development Department, host our projects here.
    - ChevonChristieCodeAndTools: This repo contains WP7 helper code and related utilities, among other things, that I have accumulated across my projects. Simply open the solution in VS2010. All code is provided AS-IS and there is NO warranty for anything in this repo. If you would like to check out some of the applications this code came from, head over to http://binaryred.com/portfolio
    - CodeDom Assistant: Generating CodeDom code by parsing C# or VB. ??C#??VB??CodeDom??
    - ColaBBS: ColaBBS for the .NET Framework.
    - DragonDen: The DragonDEN project consists of the SIMPLE SHARK PROMPT and DOS operating systems. For help in DOS, type '?' after logging in with username and password. Username: Drago, Password: dospassword.
    - Dynamics XRM Tools: Dynamics XRM Tools brings you a quality range of applications that provide a useful set of features to enhance your experience while using and developing against Microsoft Dynamics CRM 2011. Currently available applications include the OData Query Designer, Metadata Browser, CRM 4 to CRM 2011 JavaScript Converter, Trace Tool and Statistics. About Dynamics XRM Tools: the Dynamics XRM Tools project provides a modular framework for hosting Silverlight applications within a single shel...
    - Employee Info System: Employee Info System.
    - File upload control - MS CRM 2011: File upload plug-in for MS CRM 2011. The objective of this plug-in is to provide file upload functionality inside MS CRM 2011 forms. The plug-in also provides file extension control, multiple file uploads in one form, configurable labels and much more. It is very easy to use; proper steps are provided in the documentation and the plug-in is provided for download.
    - GadgeteerCookbook: A collection of projects for Microsoft .NET Gadgeteer.
    - Graboo: Grabooo Imagine Cup App! ;)
    - How High is It: A Windows Phone tool to calculate the height of a building or something else.
    - H-Share: H-Share is my personal sharing tool try-out project.
    - Internals Viewer (updated) for SQL Server 2008 R2: Internals Viewer is a tool for looking into the SQL Server storage engine and seeing how data is physically allocated, organised and stored. All sorts of tasks performed by a DBA or developer can benefit greatly from knowledge of what the storage engine is doing and how it works. This version is for SSMS 2008 R2.
    - MetaWeblog Utility: A .NET library for MetaWeblog operations. To make it easy to use, the library includes proxies/agents for top blog sites.
    - MetroRate: MetroRate is a control for displaying ratings with stars in Windows 8 WinRT Metro XAML apps.
    - Ph?n m?m qu?n lý ký túc xá sinh viên ÐH Tây Nguyên: Student dormitory management software for Tây Nguyên University. Contact us for more details!
    - ProceXSS: ProceXSS is an ASP.NET HTTP module for detecting and ignoring XSS attacks.
    - RFIDCashier: An RFID scanner tool.
    - SaharMediaPlayer: ggg
    - ScienceLP: ScienceLP.
    - silverlight4Stu: ????
    - Stackr Programming Language: A stack-based programming language, initially targeted for trans-compilation to DCPU-16 assembly.
    - Timeline example VB.net: timeline vb.
    - TKLSite: not for you.
    - Volta AutoParts: Volta AutoParts is a lightweight website used to let more and more people know about Volta's auto accessories and parts. We are developers on Microsoft .NET, and this project is also an opportunity for us to cooperate.
    - ????????: good

    Read the article

  • Google Chrome - NETWORK_IP_ADDRESSES_CHANGED - pages only partially load

    - by Julian
    Hi. Google Chrome version: 8.0.552.224 (Official Build 68599). I have developed a web site using ASP.NET MVC and jQuery, and it uses a lot of Ajax. I noticed that occasionally a web page does not finish loading all files from the server. Browsing through forums, it seems that this also happens to other people. Looking into the Chromium net-internals (type chrome://net-internals/ as a URL in the Chrome browser), I noticed that pages do not finish loading if a NETWORK_IP_ADDRESSES_CHANGED event occurred. Any files that were not fetched from the server before the time of the NETWORK_IP_ADDRESSES_CHANGED event failed to arrive with error (-3). Do you know why the NETWORK_IP_ADDRESSES_CHANGED event occurs? Is there a way to stop it from happening? Thanks and be happy, Julian

    Read the article

  • Getting Cabal to work with GHC 6.12.1

    - by Dan Dyer
    I've installed the latest GHC package (6.12.1) on OS X, but I can't get Cabal to work. I've removed the version I had previously that worked with GHC 6.10 and tried to re-install from scratch. The latest Cabal version available for download is 1.6.0.2. However, when I try to build this I get the following error: Configuring Cabal-1.6.0.2... Setup: failed to parse output of 'ghc-pkg dump' From what I've found searching, this seems to suggest that the version of Cabal is too old for the version of GHC. Is there any way to get Cabal to work with GHC 6.12.1 yet? EDIT: To be clear, I'm trying to set-up cabal-install.

    Read the article

  • How to Add Serialized LINQ to SQL Entities to a Word 2007 Document

    - by Ryan Riley
    I built a template-based document generator using the Open XML SDK (1.0), the Word 2007 Content Control Toolkit and LINQ to SQL (using the CodeSmith PLINQO templates). To do this, I serialized the LINQ to SQL entities to XML by retrieving the entity using DataLoadOptions specified in the source code. This works great, except that to initially populate the XML in my template, I currently have to copy and paste the XML from the Immediate window in VS2008 into the Content Control Toolkit, and it still has all the data from the current entity. I'm looking for answers to two questions: 1) Is this a good way to build a document generator with Word 2007? 2) How can I generate just the XML I need without the data? I've thought of creating an XSD and then creating an empty XML document, but wasn't sure how to do that programmatically so that a business user can get the XML for the template. (That's not a requirement, just a nice-to-have.) Thanks for your feedback, Ryan

    Read the article

  • iPhone - how to find a parent View

    - by Oscar Peli
    Hi there, I need some help finding a view inside a hierarchy. Here is how I build up the view stack. Inside my first UITableViewController I push a UIViewController that contains a UITabBarController: [[self navigationController] pushViewController:itemVC animated:YES]; Inside the UITabBarController I add a UITableViewController: ISSTableViewController *graphics = (ISSTableViewController *)[tabBarController.viewControllers objectAtIndex:3]; Inside didSelectRowAtIndexPath I present a modal view controller using a UINavigationController: GraficoViewController *graph = [[GraficoViewController alloc] initWithNibName:@"GraficoViewController" bundle:nil]; UINavigationController *navigationController = [[UINavigationController alloc] initWithRootViewController:graph]; [self presentModalViewController:navigationController animated:YES]; [navigationController release]; Now the (BIG) question is: I have to hide the navigation bar of my first UITableViewController inside my last view. I tried this: [[[[[self parentViewController] parentViewController] parentViewController] navigationController] setNavigationBarHidden:YES]; but it doesn't work. Can someone tell me how I can find my ancestor view? Thanks.

    Read the article

  • ClearCase: Versioning at file level

    - by Naveen
    Hi, I have a peculiar problem about how to maintain a ClearCase project. This project is an XML schema repository where each schema has a version. This repository is common and is used by all the apps in the enterprise. From a ClearCase perspective the project has a single component. Now the apps can be using different versions of the schema(s), so we are trying to figure out a way to set up the project in such a way that a project can have a baseline of which versions of these files are included in a build. The only way we know of to do this is to create a component for each schema or group of schemas and create a stream for each app to include the components it uses. But that would result in too many components. Has anyone dealt with something like this before? We are prepared to restructure the whole project if necessary, so we are open to any idea. Thanks for the help.

    Read the article

  • How can I set up a fault-tolerant web-service built with Erlang/OTP?

    - by Jonas
    I would like to set up a fault-tolerant web service. I will build the web service with Erlang/OTP. At the beginning the web service will be hosted on a few VPSes. Each VPS has its own IP address, and I can use more IPs if I need to. I would like to have the domain name pointing to a single IP address. How can I set up my Erlang/OTP application to be fault-tolerant behind a single IP address? Do I need to use a VLAN? Is there a way my Erlang/OTP application can use heartbeats and handle virtual IP addresses to route the traffic? Or how should I solve this problem?

    Read the article

  • Android - making a layout like Entourage/Outlook calendar

    - by teepusink
    Hi, I'm trying to build a layout / view like the Entourage/Outlook calendar view, where when I schedule a time block, a new view pops up overlaying the time block. Something like this one, except daily rather than monthly as in the image: http://images.appleinsider.com/office-2008-entourage-10.png What is the best way to implement that? Right now I'm using a TableLayout to render the times in the first column and overlay the scheduled blocks on the second column. However, things got very messy, especially once the schedule goes down to 15-minute increments and there are overlapping entries. I wonder if another layout might work better? Thanks, Tee
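    A common alternative to stretching a TableLayout is to keep the hour labels in one column and lay the event blocks out absolutely in a plain container next to it, computing each block's top offset and height from its start time and duration; overlapping events then become a width/left-margin calculation instead of a row-spanning problem. The Java sketch below is only an illustration of that idea: the class and field names are invented, and the fixed pixels-per-minute scale is an assumption you would replace with density-aware math.

    import android.content.Context;
    import android.widget.FrameLayout;
    import android.widget.TextView;

    // Sketch: one FrameLayout per day acts as the "event column"; every event is a
    // child view whose top margin and height are derived from its start time and length.
    public class DayColumnBuilder {
        private static final int PX_PER_MINUTE = 2; // assumed scale; use dp-aware math in real code

        private final Context context;
        private final FrameLayout eventColumn;      // sits to the right of the hour labels

        public DayColumnBuilder(Context context, FrameLayout eventColumn) {
            this.context = context;
            this.eventColumn = eventColumn;
        }

        // startMinutes is minutes after midnight, e.g. 9:15 -> 555.
        public void addEvent(String title, int startMinutes, int durationMinutes) {
            TextView block = new TextView(context);
            block.setText(title);
            block.setBackgroundColor(0xFF99CCFF);    // calendar-entry style fill

            FrameLayout.LayoutParams lp = new FrameLayout.LayoutParams(
                    FrameLayout.LayoutParams.MATCH_PARENT,
                    durationMinutes * PX_PER_MINUTE);
            lp.topMargin = startMinutes * PX_PER_MINUTE;
            eventColumn.addView(block, lp);
        }
    }

    Two events that overlap can then share the column by narrowing their widths and offsetting the left margin of the later one, and wrapping the whole thing in a ScrollView gives the usual day-view scrolling.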

    Read the article

  • Error installing RedCloth

    - by meta
    I'm getting this error when trying to install RedCloth on openSUSE: sudo gem install RedCloth Building native extensions. This could take a while... ERROR: Error installing RedCloth: ERROR: Failed to build gem native extension. /usr/bin/ruby extconf.rb creating Makefile make sh: make: nie znaleziono polecenia ("make: command not found" in Polish) Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/RedCloth-4.2.3 for inspection. Results logged to /usr/lib/ruby/gems/1.8/gems/RedCloth-4.2.3/ext/redcloth_scan/gem_make.out I have googled this and tried everything. So I need help with that.

    Read the article

  • Unable to read .rtf file in VS.NET 2008 Setup

    - by constant learner
    Hello all, I have created a simple Windows application in .NET 2008. I'm packaging it into a setup file using .NET Setup and Deployment. I am also customizing it to include a License Agreement UI, and I am pointing the License Agreement window to the license.rtf file that is included in the Application Folder. After a successful build, if I run the setup file I can see the License Agreement window, but I cannot see the content of my file. Any ideas what the issue behind this is? Regards, CL

    Read the article

  • Best practices for developing bigger applications on Android

    - by Janusz
    I've already written some small Android applications, most of them in one Activity and with nearly no data that needs to persist on the device. Now I'm writing an application that needs more Activities and I'm a bit puzzled about how to organize all this. My app will download some data, parse it, show it to the user and then show other activities depending on the data and the user interaction. Some of that data could be cached, some of it has to be downloaded every time. Some of that data should not be re-downloaded the moment the orientation changes, but it should be the moment the activity is created... Another thing I'm confused about is things like an HttpClient. Right now, for example, I create a new HttpClient for every activity, and the same goes for location listeners. Are there books, blogs or documentation with patterns, examples and advice on organizing larger apps built on Android? Everything I've found so far is getting-started tutorials that leave me on my own after 60 lines of code...
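    For the shared-object part of the question, one widely used pattern is a custom Application subclass that owns the long-lived helpers, so Activities look them up instead of each creating their own. The sketch below is only an illustration: the MyApp name is invented, and it assumes the Apache DefaultHttpClient that shipped with Android in that era rather than any particular library from the question.

    import org.apache.http.client.HttpClient;
    import org.apache.http.impl.client.DefaultHttpClient;

    import android.app.Application;

    // Sketch: one process-wide holder for expensive, shareable objects.
    // Activities call ((MyApp) getApplication()).getHttpClient() instead of
    // constructing a new client on every screen or orientation change.
    public class MyApp extends Application {

        private HttpClient httpClient;

        @Override
        public void onCreate() {
            super.onCreate();
            httpClient = new DefaultHttpClient(); // created once for the whole app
        }

        public HttpClient getHttpClient() {
            return httpClient;
        }
    }

    The class is wired up through the android:name attribute of the <application> element in the manifest; data that must survive an orientation change but not a full restart can live in the same place, or be handed over with onRetainNonConfigurationInstance().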

    Read the article

  • WiX - create a bootstrap that passes arguments to the msiexec

    - by Dror Helper
    I need to create a bootstrap for my WiX project. I've tried using setupbld.exe, but it will only let me create an executable that shows my UI or one that behaves as a silent installer, not both. I need to be able to run the resulting executable with an argument that tells it whether or not to show the UI during installation. I've found this post by John Robbins that explains how to rebuild the setup.exe stub used in the creation of the bootstrap, but I was hoping there is a simpler way to do what I need. Does anyone know of a way to create a bootstrap that I can run either as a normal (with UI) install or as a silent install?
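    Whatever the stub ends up being built with (John Robbins' article rebuilds the native setup.exe), the decision it has to make is small: inspect its own command line and then invoke msiexec with or without the quiet switch. The sketch below shows only that branching, written in Java purely as an illustration of the flow; the product.msi file name and the --silent switch are invented, while /i and /qn are the standard msiexec switches for installing a package and suppressing the UI.

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    // Sketch: a minimal launcher that forwards a "silent" flag to msiexec.
    public class SetupLauncher {
        public static void main(String[] args) throws IOException, InterruptedException {
            boolean silent = args.length > 0 && args[0].equalsIgnoreCase("--silent");

            List<String> cmd = new ArrayList<String>();
            cmd.add("msiexec");
            cmd.add("/i");
            cmd.add("product.msi");   // the package the bootstrap carries
            if (silent) {
                cmd.add("/qn");       // no UI; leave it off for the full wizard
            }

            Process p = new ProcessBuilder(cmd).inheritIO().start();
            System.exit(p.waitFor());
        }
    }

    The same two-branch logic is what a rebuilt native setup.exe stub would carry: parse one switch, then hand the rest of the work to msiexec.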

    Read the article

  • Visual VoiceXML/VXML development tool?

    - by IVR Avenger
    Hi, all. Does anyone know of any tools out there that will let me run and debug a VXML application visually? There are a ton of VXML development tools, but they all require you to build your application within them. I have an existing application that uses JSPs to generate VXML, and I'm looking for a way to navigate through and debug the rendered VXML in much the same way that Firebug allows one to do this with HTML. I have some proxy-like tools that let me inspect the rendered code as it is sent to the VXML browser, but there's a ton of JS, which makes traversing the code by hand rather difficult. Has anyone worked with a product that allows for this? Thanks! IVR Avenger

    Read the article

< Previous Page | 464 465 466 467 468 469 470 471 472 473 474 475  | Next Page >