Search Results

Search found 28425 results on 1137 pages for 'source encoding'.


  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared: Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month's T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don't write about LobsterPot client horror stories, so I'm writing about a situation that a fellow MVP friend asked me about recently instead.

    The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what's changed. But of course, this isn't always possible. And it's much nicer to help someone with this kind of thing than to be trying to fix it yourself when you've just deleted the wrong data source. Unfortunately, SSRS lets you delete a data source without so much as a scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend's colleague. So suddenly there's a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn't the hard part (that's just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again.

    I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don't – they're catalogue items just like the reports. Let's have a look at the structure of these two tables (although if you're reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn't seem to be a Books Online page for this information, sorry about that. I'm also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there doesn't seem to have been any change here for quite a while.

    dbo.Catalog

    The Primary Key is ItemID. It's a uniqueidentifier. I'm not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what's what. But foreign keys are for that too...

    Path, Name and ParentID tell you where in the folder structure the item lives. Path isn't strictly required – you could've done recursive queries to get there. But as that would be quite painful, I'm more than happy for the Path column to be there. Path contains the Name as well, incidentally.

    Type tells you what kind of item it is. Some examples are 1 for a folder and 2 for a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it).

    Content is an image field, remembering that image doesn't necessarily store images – these days we'd rather use varbinary(max), but even in SQL Server 2012, this field is still image. It stores the actual item definition in binary form, whether it's actually an image, a report, whatever.

    LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID.

    Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn't be a separate table, but I guess that's just the way it goes. This field gets changed when the default parameters get changed in Report Manager.

    There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose NOT to replace the data sources. Anyway, they're not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I'm not sure. They're in dbo.DataSource instead.

    dbo.DataSource

    The Primary Key is DSID. Yes, it's a uniqueidentifier...

    ItemID is a foreign key reference back to dbo.Catalog.

    Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question.

    Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You'd think this should be enforced by a foreign key, but it's not. It does allow NULLs though.

    Flags is an int, and I'll come back to this.

    When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you'd be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedure that many people write, the kind of thing that means they don't need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn't quite do that. If it deleted everything on a cascading delete, we'd've lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don't want to lose the existence of the data source from the report. So it sets the Link to NULL, and it marks the data source as invalid. We see this code in that stored procedure:

        UPDATE [DataSource]
        SET    [Flags] = [Flags] & 0x7FFFFFFD, -- broken link
               [Link] = NULL
        FROM   [Catalog] AS C
               INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link]
        WHERE  (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*')

    Unfortunately there's no semi-colon on the end (but I'd rather they fix the ntext and image types first), and don't get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what's going on with the Flags field.

    What I'd LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that's in the shared data source, the one that's about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I'd say that's probably still slightly better than LOSING THE INFORMATION COMPLETELY. Sorry, rant over. I should log a Connect item – I'll put that on my todo list.

    So it sets the Link field to NULL, and marks the Flags to tell you they're broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the '2' bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc., whose binary representation ends in either 11 or 10, get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we've just used. I'd also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of the broken reports – in case you get a surprise from a different data source having been broken in the past.

        SELECT c.Path, ds.Name
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
          AND ds.[Link] IS NULL
          AND ds.[ConnectionString] IS NULL;

    When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you'd be forgetting something.

    So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. Then use this value to fix them up. To fix the Flags field, just add 2 – I prefer to use bitwise OR, which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you're changing, just in case you need to revert something later after testing (doing it all in a transaction won't help, because you'll just lock the table, stopping you from testing anything).

        UPDATE ds
        SET [Flags] = [Flags] | 2,
            [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /* insert your own GUID */
        OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
          AND ds.[Link] IS NULL
          AND ds.[ConnectionString] IS NULL;

    But please be careful. Your mileage may vary. And there's no reason why 400-odd broken reports need to be quite the nightmare that it could be. Really, it should be less than five minutes. @rob_farley
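
    In case the bit arithmetic looks like magic, here is a minimal C# sketch of the same mask and repair operations – the Flags value is made up, not real ReportServer data:

        using System;

        class FlagsDemo
        {
            static void Main()
            {
                int flags = 7;                    // hypothetical Flags with the '2' bit set
                int broken = flags & 0x7FFFFFFD;  // what DeleteObject does: clears the '2' bit
                int repaired = broken | 2;        // what the repair does: sets the '2' bit again
                Console.WriteLine("{0} -> {1} -> {2}", flags, broken, repaired); // 7 -> 5 -> 7

                // A row counts as broken when clearing the bit changes nothing,
                // which is exactly the WHERE clause trick used above:
                Console.WriteLine(broken == (broken & 0x7FFFFFFD)); // True
            }
        }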

    Read the article

  • Open Source Software Development Center at University of Belgrade

    - by Tori Wieldt
    A new Open Source Software Development Center is open at University of Belgrade, Serbia. It centers around using Java & NetBeans as open source projects to learn from and contribute to. Assistant Professor Zoran Sevarac says that not only does the center allow him to teach software development using open source projects, but also "we are improving our University courses based on the experience we get from working on open source code."  Some of the projects underway are a NetBeans UML plugin; Neuroph (a Java neural network framework, with a NetBeans Platform-based UI); a NetBeans DOAP Plugin; WorkieTalkie (NetBeans chat plugin); and 2D and 3D visualization plugins for NetBeans. University of Belgrade also has an official university course about open source development, where students learn to use development tools, work in teams, participate in open source projects and learn from real world software development projects. Students, teachers, and researchers at the University of Belgrade, and any member of the open source community are welcome to come to learn software development from successful open source projects. For more information, you can contact Zoran Sevarac (@neuroph on Twitter).

    Read the article

  • Do support sites like Stack Overflow upset the paid-support open source model?

    - by ajax81
    In order to stay relevant in the marketplace, I'm researching new business models for my software company. The open source model with paid support seems like a good fit for our product, but I have concerns about whether a paid-support model is viable in an era where top-notch help is readily available for free on sites like those in the Stack Exchange network.

    Case in point – I moved my employees to Ubuntu last year because I didn't want to pay for Win 7 licenses and new hardware (plus, the Mono platform was highly attractive). My staff had no Linux experience, but were able to achieve relative competency in about 120 days with the help of AskUbuntu, Stack Overflow, and a few "For Dummies" books. We did employ an Ubuntu consultant for 7 days to provide training and support, but beyond that spent $0.00 on any kind of paid expertise.

    As for my due diligence, I ran a 3-month beta of the freemium-paid-support model with one of our smaller customers, and achieved mediocre results. I'd like to think it's because our software is so stable and easy to use that the customer didn't need much paid support, but I suspect that they circumvented the terms of our SLA in the same manner that we did with the move to Ubuntu.

    Does anyone out there have any thoughts, advice, or experience relevant to the move I'm considering? What worked, what didn't, etc.?

    Read the article

  • Are there any good examples of open source C# projects with a large number of refactorings?

    - by Arjen Kruithof
    I'm doing research into software evolution and C#/.NET, specifically on identifying refactorings from changesets, so I'm looking for a suitable (XP-like) project that may serve as a test subject for extracting refactorings from version control history. Which open source C# projects have undergone a large number of refactorings?

    Criteria

    A suitable project has its change history publicly available, has compilable code at most commits, and has had at least several refactorings applied in the past. It does not have to be well-known, and the code quality or number of bugs is irrelevant. Preferably the code is in a Git or SVN repository.

    The result of this research will be a tool that automatically creates informative, concise comments for a changeset. This should improve on the common development practice of just not leaving any comments at all.

    EDIT: As Peter argues, ideally all commit comments would be teleological (goal-oriented). Practically, if a comment is made at all it is often descriptive, merely a summary of the changes. Sadly we're a long way from automatically inferring developer intentions!

    Read the article

  • Anatomy of a .NET Assembly - Custom attribute encoding

    - by Simon Cooper
    In my previous post, I covered how field, method, and other types of signatures are encoded in a .NET assembly. Custom attribute signatures differ quite a bit from these, which consequently affects attribute specifications in C#.

    Custom attribute specifications

    In C#, you can apply a custom attribute to a type or type member, specifying a constructor as well as the values of fields or properties on the attribute type:

        public class ExampleAttribute : Attribute
        {
            public ExampleAttribute(int ctorArg1, string ctorArg2) { ... }
            public Type ExampleType { get; set; }
        }

        [Example(5, "6", ExampleType = typeof(string))]
        public class C { ... }

    How does this specification actually get encoded and stored in an assembly?

    Specification blob values

    Custom attribute specification signatures use the same building blocks as other types of signatures: the ELEMENT_TYPE structure. However, they significantly differ from other types of signatures, in that the actual parameter values need to be stored along with type information. There are two types of specification arguments in a signature blob: fixed args and named args. Fixed args are the arguments to the attribute type constructor; named args are specified after the constructor arguments to provide a value to a field or property on the constructed attribute type (PropertyName = propValue).

    Values in an attribute blob are limited to one of the basic types (one of the number types, character, or boolean), a reference to a type, an enum (which, in .NET, has to use one of the integer types as a base representation), or arrays of any of those. Enums and the basic types are easy to store in a blob – you simply store the binary representation. Strings are stored starting with a compressed integer indicating the length of the string, followed by the UTF8 characters. Array values start with an integer indicating the number of elements in the array, then the item values concatenated together.

    Rather than using a coded token, Type values are stored using a string representing the type name and fully qualified assembly name (for example, MyNs.MyType, MyAssembly, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef). If the type is in the current assembly or mscorlib then just the type name can be used. This is probably done to prevent direct references between assemblies solely because of attribute specification arguments; assemblies can be loaded in the reflection-only context and attribute arguments still processed, without loading the entire assembly.

    Fixed and named arguments

    Each entry in the CustomAttribute metadata table contains a reference to the object the attribute is applied to, the attribute constructor, and the specification blob. The number and type of arguments to the constructor (the fixed args) can be worked out from the method signature referenced by the attribute constructor, and so the fixed args can simply be concatenated together in the blob without any extra type information.

    Named args are different. These specify the value to assign to a field or property once the attribute type has been constructed. In the CLR, fields and properties can be overloaded just on their type; different fields and properties can have the same name. Therefore, to uniquely identify a field or property you need:

    - Whether it's a field or property (indicated using byte values 0x53 and 0x54, respectively)
    - The field or property type
    - The field or property name

    After the fixed arg values is a 2-byte number specifying the number of named args in the blob. Each named argument has the above information concatenated together, mostly using the basic ELEMENT_TYPE values, in the same way as a method or field signature. A Type argument is represented using the byte 0x50, and an enum argument is represented using the byte 0x55 followed by a string specifying the name and assembly of the enum type. The named argument property information is followed by the argument value, using the same encoding as fixed args.

    Boxed objects

    This would be all very well, were it not for object and object[]. Arguments and properties of type object allow a value of any allowed argument type to be specified. As a result, more information needs to be specified in the blob to interpret the argument bytes as the correct type. So, the argument value is simply prepended with the type of the value, by specifying the ELEMENT_TYPE or name of the enum the value represents. For named arguments, a field or property of type object is represented using the byte 0x51, with the actual type specified in the argument value.

    Some examples...

    All specification signatures start with the 2-byte prolog 0x0001. Similar to my previous post in the series, names in capitals correspond to a particular byte value in the ELEMENT_TYPE structure. For strings, I'll simply give the string value, rather than the length and UTF8 encoding in the actual blob. I'll be using the following enum and attribute types to demonstrate specification encodings:

        class AttrAttribute : Attribute
        {
            public AttrAttribute() {}
            public AttrAttribute(Type[] tArray) {}
            public AttrAttribute(object o) {}
            public AttrAttribute(MyEnum e) {}
            public AttrAttribute(ushort x, int y) {}
            public AttrAttribute(string str, Type type1, Type type2) {}

            public int Prop1 { get; set; }
            public object Prop2 { get; set; }
            public object[] ObjectArray;
        }

        enum MyEnum : int
        {
            Val1 = 1, Val2 = 2
        }

    Now, some examples. Here, the specification binds to the (ushort, int) attribute constructor, with fixed args only. The specification blob starts off with a prolog, followed by the two constructor arguments, then the number of named arguments (zero):

        [Attr(42, 84)]
        0x0001
            0x002a
            0x00000054
        0x0000

    An example of string and type encoding:

        [Attr("MyString", typeof(Array), typeof(System.Windows.Forms.Form))]
        0x0001
            "MyString"
            "System.Array"
            "System.Windows.Forms.Form, System.Windows.Forms, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
        0x0000

    As you can see, the full assembly specification of a type is only needed if the type isn't in the current assembly or mscorlib. Note, however, that the C# compiler currently chooses to fully-qualify mscorlib types anyway.

    An object argument (this binds to the object attribute constructor), and two named arguments (a null string is represented by 0xff and the empty string by 0x00):

        [Attr((ushort)40, Prop1 = 12, Prop2 = "")]
        0x0001
            U2 0x0028
        0x0002
            0x54 I4 "Prop1" 0x0000000c
            0x54 0x51 "Prop2" STRING 0x00

    Right, more complicated now. A type array as a fixed argument:

        [Attr(new[] { typeof(string), typeof(object) })]
        0x0001
            0x00000002 // the number of elements
            "System.String"
            "System.Object"
        0x0000

    An enum value, which is simply represented using the underlying value. The CLR works out that it's an enum using information in the attribute constructor signature:

        [Attr(MyEnum.Val1)]
        0x0001
            0x00000001
        0x0000

    And finally, a null array, and an object array as a named argument:

        [Attr((Type[])null, ObjectArray = new object[] { (byte)2, typeof(decimal), null, MyEnum.Val2 })]
        0x0001
            0xffffffff
        0x0001
            0x53 SZARRAY 0x51 "ObjectArray"
            0x00000004
                U1 0x02
                0x50 "System.Decimal"
                STRING 0xff
                0x55 "MyEnum" 0x00000002

    As you'll notice, a null object is encoded as a null string value, and a null array is represented using a length of -1 (0xffffffff).

    How does this affect C#?

    So, we can now explain why the limits on attribute arguments are so strict in C#. Attribute specification blobs are limited to basic numbers, enums, types, and arrays. As you can see, this is because the raw CLR encoding can only accommodate those types. Special byte patterns have to be used to indicate object, string, Type, or enum values in named arguments; you can't specify an arbitrary object type, as there isn't a generalised way of encoding the resulting value in the specification blob.

    In particular, decimal values can't be encoded, as decimal isn't a 'built-in' CLR type that has a native representation (you'll notice that decimal constants in C# programs are compiled as several integer arguments to DecimalConstantAttribute). Jagged arrays also aren't natively supported, although you can get around it by using an array as a value to an object argument:

        [Attr(new object[] { new object[] { new Type[] { typeof(string) } }, 42 })]

    Finally...

    Phew! That was a bit longer than I thought it would be. Custom attribute encodings are complicated! Hopefully this series has been an informative look at what exactly goes on inside a .NET assembly. In the next blog posts, I'll be carrying on with the 'Inside Red Gate' series.
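
    If you want to see the parsed fixed and named args without decoding blob bytes by hand, System.Reflection's CustomAttributeData will show them to you. A minimal sketch – the attribute and target type here are cut-down stand-ins for the samples above:

        using System;
        using System.Reflection;

        [AttributeUsage(AttributeTargets.All)]
        class AttrAttribute : Attribute
        {
            public AttrAttribute(ushort x, int y) { }
            public int Prop1 { get; set; }
            public object Prop2 { get; set; }
        }

        [Attr(42, 84, Prop1 = 12, Prop2 = "")]
        class Target { }

        static class Program
        {
            static void Main()
            {
                // CustomAttributeData reads the specification blob without
                // instantiating the attribute type (it also works for
                // reflection-only loaded assemblies).
                foreach (CustomAttributeData cad in
                         CustomAttributeData.GetCustomAttributes(typeof(Target)))
                {
                    // The fixed args, in constructor order:
                    foreach (CustomAttributeTypedArgument f in cad.ConstructorArguments)
                        Console.WriteLine("fixed: " + f.ArgumentType + " = " + f.Value);

                    // The named args (field or property assignments):
                    foreach (CustomAttributeNamedArgument n in cad.NamedArguments)
                        Console.WriteLine("named: " + n.MemberInfo.Name + " = " + n.TypedValue.Value);
                }
            }
        }

    Run against this example, the fixed args come back as System.UInt16 = 42 and System.Int32 = 84 – the same values visible in the raw blob above.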

    Read the article

  • Advanced Data Source Engine coming to Telerik Reporting Q1 2010

    This is the final blog post from the pre-release series. In it we are going to share with you some of the updates coming to our reporting solution in Q1 2010. A new Declarative Data Source Engine will be added to Telerik Reporting that will allow full control over data management, and deliver significant gains in rendering performance and memory consumption. Some of the engine's new features will be:

    - Data source parameters – these parameters will be used to limit the data retrieved from the data source to just the data needed for the report. Data source parameters are processed on the data source side; only the queried data is fetched to the reporting engine, rather than the full data source. This leads to lower memory consumption, because data operations are performed on the queried data only, rather than on all data. As a result, only the queried data needs to be stored in memory, vs. the whole dataset, which was the case with the old approach. (A generic illustration appears at the end of this post.)

    - Support for stored procedures – they will assist in achieving a consistent implementation of logic across applications, and are especially practical for performing repetitive tasks. A stored procedure stores the SQL statements and logic, which can then be executed in different reports and/or applications. Stored procedures will not only save development time, but they will also improve performance, because each stored procedure is compiled on the database server once, and then is reused. In Telerik Reporting, stored procedures will also be parameterized, where elements of the SQL statement are bound to parameters. These parameterized SQL queries are handled through the data source parameters, and are evaluated at run time. Using parameterized SQL queries will improve performance and decrease the memory footprint of your application, because they are applied directly on the database server and only the necessary data is downloaded to the middle tier or client machine.

    - Calculated fields through expressions – with the help of the new reporting engine you will be able to use field values in formulas to come up with a calculated field. A calculated field is a user-defined field that is computed "on the fly" and does not exist in the data source, but can perform calculations using the data of the data source object it belongs to. Calculated fields are very handy for adding frequently used formulas to your reports.

    - Improved performance and optimized in-memory OLAP engine – the new data source will come with several improvements in how aggregates are calculated and memory is managed. As a result, you may see between a 30% (for simpler reports) and 400% (for calculation-intensive reports) improvement in rendering performance, and about a 50% decrease in memory consumption.

    - Full design-time support through wizards – declarative data sources are a great advance and will save developers countless hours of coding. In Q1 2010, and true to Telerik Reporting's essence, using the new data source engine and its features requires little to no coding, because we have extended most of the wizards to support the new functionality. The newly extended wizards are available in VS2005/VS2008/VS2010 design-time.

    More features will be revealed on the product's what's new page when the new version is officially released in a few days. Also make sure you attend the free webinar on Thursday, March 11th, which will be dedicated to the updates in Telerik Reporting Q1 2010.
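
    As a generic illustration of why data source parameters help – this is plain ADO.NET showing the principle, not Telerik's API, and the connection string and schema are made up – a parameterized filter runs on the database server, so only the matching rows reach the client:

        using System;
        using System.Data.SqlClient;

        static class ParameterizedQueryDemo
        {
            static void Main()
            {
                // Hypothetical connection string and table, for illustration only.
                const string connectionString =
                    "Server=.;Database=ReportsDemo;Integrated Security=true";

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT OrderId, Total FROM Orders WHERE Country = @country", conn))
                {
                    // The filter is evaluated by the database engine, so only
                    // the rows for @country travel over the wire.
                    cmd.Parameters.AddWithValue("@country", "Bulgaria");
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine("{0}: {1}", reader["OrderId"], reader["Total"]);
                }
            }
        }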

    Read the article

  • Best Practices for Setup and Management of an Open Source Project

    - by VirtuosiMedia
    Later this year I want to release a PHP framework that I've been working on as open source. I do use source control (SVN), but on an extremely limited basis. I'm self-taught, I develop by myself, and I don't have the experience of working with large teams. I have some ideas about what can help make a project successful, but I'm fuzzy on some of the details. Since it's not yet released, I want to do everything I can to set up the right infrastructure from the beginning. What do I need to know in order to set up and manage a successful project?

    Some ideas that I have to make it successful (beyond marketing it):

    - Good documentation and tutorials
    - Automated unit tests and builds to push updates to the website
    - A clear roadmap
    - Bug tracking integrated with the source control
    - A style guide to keep the code consistent
    - A forum for the community to get support, share ideas, etc.
    - A good example application built with the framework
    - A blog to keep the community informed
    - Maintaining backwards compatibility wherever possible

    Some of my questions:

    - How do I set up and automate a one-step submit-test-commit-generate API docs-push update to website process?
    - How do I handle (technically) submissions from other users? How can I ensure that those submissions must be approved before being integrated?
    - What are some of the pitfalls that can be avoided in terms of the project community? I'd prefer it to be as friendly and helpful as possible without a lot of drama.

    I'd love to learn from your experience on any of these points. If you think I'm missing anything big, please share that as well. Any resources (preferably geared toward a beginner) that you could point me towards would also be greatly appreciated.

    Read the article

  • Open Source CMS (.Net vs Java)

    - by CmsAndDotNetKindaGuy
    I must say up-front that this is not a strictly programming-related question, and a very opinionated one. I am the lead developer of the dominant .NET CMS in my country, and I don't like my own product :). Managerial decisions over what is important or not, and large chunks of legacy code done before I joined, give me a headache every day I go to work. Anyway, having a vast amount of experience in the web industry and a very good grasp of C# and programming practices, I've been designing my own CMS for the past few months. Now the problem is that I'm an open source kinda guy, so I have a dilemma: should I use C# and .NET, which has crippled multi-platform support, or should I drop .NET entirely and start learning Java, where I can create a truly open-source and cross-platform CMS?

    Please don't answer that I should contribute to an existing open source CMS. I ruled that out after spending a great deal of time searching for something similar in structure to what I have in mind. I am not a native speaker, so feel free to correct my syntax or rephrase my question.

    Read the article

  • WPF Image change source when button is disabled

    - by Taylor
    Hi, I'm trying to show a different image when the button is disabled. Should be easy with triggers, right?! For some reason, I have not been able to get the images to switch. I've tried setting triggers on both the image and button. What is wrong with what I have below? How can I change the image source when the button is enabled/disabled? Thanks!

        <Button x:Name="rleft" Command="{Binding Path=Operation}" CommandParameter="{x:Static vm:Ops.OpA}">
            <Button.Content>
                <StackPanel>
                    <Image Width="24" Height="24"
                           RenderOptions.BitmapScalingMode="NearestNeighbor"
                           SnapsToDevicePixels="True"
                           Source="/MyAssembly;component/images/enabled.png">
                        <Image.Style>
                            <Style>
                                <Style.Triggers>
                                    <DataTrigger Binding="{Binding ElementName=rleft, Path=Button.IsEnabled}" Value="False">
                                        <Setter Property="Image.Source" Value="/MyAssembly;component/images/disabled.png" />
                                    </DataTrigger>
                                </Style.Triggers>
                            </Style>
                        </Image.Style>
                    </Image>
                </StackPanel>
            </Button.Content>
        </Button>
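
    One alternative worth sketching: the same swap can be done in code-behind instead of a style trigger. This is a hedged sketch, not the fix for the trigger itself – it assumes the Image is given x:Name="img", and it reuses the pack URIs from the markup above:

        using System;
        using System.Windows;
        using System.Windows.Media.Imaging;

        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                InitializeComponent();

                // Swap the image whenever the button's effective IsEnabled flips.
                rleft.IsEnabledChanged += (s, e) =>
                {
                    string name = rleft.IsEnabled ? "enabled" : "disabled";
                    img.Source = new BitmapImage(new Uri(
                        "pack://application:,,,/MyAssembly;component/images/" + name + ".png"));
                };
            }
        }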

    Read the article

  • Why does Komodo edit not open files on doubleclick due to some path encoding error?

    - by gakera
    When I double-click a file in Finder to open it in Komodo Edit, it displays an error message saying it can't find the file. The path is displayed, but with the wrong encoding, such that special characters such as "ó" and "ð" show up as weird boxes. Screenshot: http://i51.tinypic.com/2qkleg4.jpg

    I'm running OS X Snow Leopard 10.6.4 and Komodo Edit version 5: "Komodo Edit, version 5.2.4, build 4343, platform macosx-x86. Built on Tue Dec 8 18:18:35 2009."

    When I drag the file into Komodo Edit it opens just fine, displaying the path correctly with "ó"s and all :P This doesn't happen if I right-click the file in Finder and select to open it with TextEdit or Word or (God forbid) MacVim – only when I select to open it in Komodo Edit. This is annoying as all hell.

    Read the article

  • Where is the Open Source alternative to WPF?

    - by Evan Plaice
    If we've learned anything from HTML/CSS, it's that declarative languages (like XML) work best to describe user interfaces, because:

    - It's easy to build code preprocessors that can template the code effectively.
    - The code is in a well-defined, well-structured (ideally) format, so it's easy to parse.
    - The technology to effectively parse or crawl an XML-based source file already exists.
    - The UI's scripted code becomes much simpler and easier to understand.
    - It's simple enough that designers are able to design the interface themselves.
    - Programmers suck at creating UIs, so it should be made easy enough for designers.

    I recently took a look at the meat of a WPF application (i.e. the XAML) and it looks surprisingly similar to the declarative language style used in HTML. It's blindingly apparent to me that the current state of desktop UI development is largely fragmented; otherwise there wouldn't be so much duplicated effort in the domain of user interfaces (GTK, XUL, Qt, WinForms, WPF, etc). There are 45 GUI platforms for Python alone.

    It's painfully obvious to me that there should be a general-purpose, open source, standardized, platform-independent markup language for designing desktop GUIs, much like what the W3C made HTML/CSS into. WPF, or more specifically XAML, seems like a pretty likely step in the right direction. Why hasn't anyone in the Open Source community (AFAIK) even scratched the surface of this issue? Now that the 'browser wars' are over, should we look forward to a future of 'desktop GUI wars'?

    Note: This topic is relatively subjective in the attempt to be 'future-thinking.' I think that desktop GUI development in its current state sucks ((really)hard) and, even though WPF is still in its infancy, it presents a likely solution to the problem. Has no one in the OS community looked into developing something similar because they don't see the value, or because it's not worth the effort?

    Read the article

  • Java Trying to get a line of source from a website

    - by dsta
    I'm trying to get one line of source from a website and then I'm returning that line back to main. I keep on getting an error at the line where I define InputStream in. Why am I getting an error at this line?

        public class MP3LinkRetriever
        {
            private static String line;

            public static void main(String[] args)
            {
                String link = "www.google.com";
                String line = "";

                while (link != "")
                {
                    link = JOptionPane.showInputDialog("Please enter the link");
                    try
                    {
                        line = Connect(link);
                    }
                    catch (Exception e) { }

                    JOptionPane.showMessageDialog(null, "MP3 Link: " + parseLine(line));

                    String text = line;
                    Toolkit.getDefaultToolkit().getSystemClipboard()
                        .setContents(new StringSelection(text), new ClipboardOwner()
                        {
                            public void lostOwnership(Clipboard c, Transferable t) { }
                        });
                    JOptionPane.showMessageDialog(null, "Link copied to your clipboard");
                }
            }

            public static String Connect(String link) throws Exception
            {
                String strLine = null;
                InputStream in = null;
                try
                {
                    URL url = new URL(link);
                    HttpURLConnection uc = (HttpURLConnection) url.openConnection();
                    in = new BufferedInputStream(uc.getInputStream());
                    Reader re = new InputStreamReader(in);
                    BufferedReader r = new BufferedReader(re);

                    int index = -1;
                    while ((strLine = r.readLine()) != null && index == -1)
                        index = strLine.indexOf("<source src");
                }
                finally
                {
                    try { in.close(); } catch (Exception e) { }
                }
                return strLine;
            }

            public static String parseLine(String line)
            {
                line = line.replace("<source", "");
                line = line.replace(" src=", "");
                line = line.replace("\"", "");
                line = line.replace("type=", "");
                line = line.replace("audio/mpeg", "");
                line = line.replace(">", "");
                return line;
            }
        }

    Read the article

  • Re-encoding a video file and increasing the size -- does this improve the quality?

    - by Josh
    If I have a video file at 320x240 resolution which I want to re-encode (because I don't like the encoding it's in now), and I also want to play it at double size (640x480), will I get higher quality if I scale it up to 640x480 when I convert it to the new format, versus keeping it at 320x240 in the new format and playing it at double size? This probably depends on the program used to convert, and if so, please let me know any program which might increase the quality.

    Here's my thinking: if I play a 320x240 file at double size, the system has to scale up each frame in real time, whereas if I scale up while recompressing, the system may be able to use a more intensive algorithm like bicubic interpolation. However, I am not sure if this is true or not.

    Read the article

  • Programmatically parse and edit C++ Source Files

    - by Kryten
    Hi, I want to be able to programmatically parse and edit C++ source files. I need to be able to change/add code in certain sections of code (i.e. in functions, class blocks, etc). I would also (preferably) like to be able to get comments as well. Part of what I want to do can be explained by the following piece of code:

        CPlusPlusSourceParser cp = new CPlusPlusSourceParser("x.cpp"); // create C++ source parser object
        CPlusPlusSourceFunction[] funcs = cp.getFunctions();           // get all the functions
        for (int i = 0; i < funcs.length; i++) {                       // loop through all functions
            funcs[i].append(/* ... code I want to append ... */);      // append some code to function
        }
        cp.save();  // save new source
        cp.close(); // close file

    How can I do that? I'd like to be able to do this preferably in Java, C++, Perl, Python or C#. However, I am open to other language APIs.

    Read the article

  • Pros and Cons of Proprietary Software

    - by Jon Purdy
    Proprietary software is about as good as open-source software. There are so many problems with proprietary technologies, however, that I'm beginning to think it's best to avoid them:

    - The software will only be maintained as long as the company exists (and profits).
    - The level of security of the application is as unknowable as the source code.
    - Alterations and derivative works, however necessary and beneficial, are disallowed.

    I simply don't see any point in even learning to use such systems as those created by Microsoft and Apple. Of course I don't pretend that ignorance is the superior option: one has to have a certain working knowledge simply because of the ubiquity of these things. I just don't see any reason why, as an independent developer, I should ever consider it a remotely good idea to actually use them.

    So that's the question, or discussion topic, or what have you: In what ways do developers benefit at all from using closed-source development software?

    Read the article

  • How to cross-reference many character encodings with ASCII OR UTFx?

    - by Garet Claborn
    I'm working with a binary structure whose goal is to index the significance of specific bits for any character encoding, so that we may trigger events while doing specific checks against the profile. Each character encoding scheme has an associated system record. This record's leading value is a C++ unsigned long long and signifies the length, in bits, of encoded characters. Following the length are three values, each a bit field of that length:

    - offset_mask – defines the occurrence of non-printable characters within the min,max of print_mask
    - range_mask – defines the occurrence of the most popular 50% of printable characters
    - print_mask – defines the occurrence value of printable characters

    The structure of profiles has changed since the original version of this question. Most likely I will try to factorize or compress these values in the long term instead of starting out with ranges, after reading more. I have to write some of the core functionality myself, for these main reasons:

    - It has to fit into a particular event architecture we are using.
    - Better understanding of character encoding – I'm about to need it.
    - Integrating into a non-linear design excludes many libraries without special hooks.

    I'm unsure if there is a standard, cross-encoding mechanism for communicating such data already. I'm just starting to look into how chardet might do profiling, as suggested by @amon. The Unicode BOM would be easy enough (for my current project) if all encodings were Unicode. Of course ideally one would like to support all encodings, but I'm not asking about implementation – only the general case.

    How can these profiles be efficiently populated, to produce a set of bitmasks which we can use to match strings with common characters in multiple languages? If you have any editing suggestions please feel free – I am a lightweight when it comes to localization, which is why I'm trying to reach out to the more experienced. Any caveats you may be able to help with will be appreciated.
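
    To make the record layout concrete, here is a rough C# rendering of the profile described above. The names, the BitArray representation, and the significance check are assumptions for illustration, not part of the original design:

        using System;
        using System.Collections;

        // A sketch of one per-encoding profile record: a bit length,
        // followed by three bit fields of that length.
        class EncodingProfile
        {
            public readonly int BitsPerChar;     // length, in bits, of an encoded character
            public readonly BitArray OffsetMask; // non-printables within the print_mask range
            public readonly BitArray RangeMask;  // most popular 50% of printable characters
            public readonly BitArray PrintMask;  // printable characters

            public EncodingProfile(int bitsPerChar)
            {
                BitsPerChar = bitsPerChar;
                OffsetMask = new BitArray(bitsPerChar);
                RangeMask = new BitArray(bitsPerChar);
                PrintMask = new BitArray(bitsPerChar);
            }

            // Example event-style check: is bit position i significant
            // for printable characters in this encoding?
            public bool IsPrintSignificant(int i)
            {
                return PrintMask[i];
            }
        }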

    Read the article

  • Unable to debug an encoded JavaScript?

    - by miles away
    I'm having some problems debugging an encoded JavaScript. The script I'm referring to is given in this link over here. The encoding here is simple: it works by shifting the Unicode values by whatever CodeKey was used during encoding. The code that does the decoding is given below:

        <script language="javascript">
        function dF(s){
            var s1=unescape(s.substr(0,s.length-1));
            var t='';
            for(i=0;i<s1.length;i++)
                t+=String.fromCharCode(s1.charCodeAt(i)-s.substr(s.length-1,1));
            document.write(unescape(t));
        }
        </script>

    I'm interested in knowing and understanding the values (e.g. s1, t). For example, when the value of i=0, what values would the following hold: s1.charCodeAt(i) and s.substr(s.length-1,1)?

    The reason I'm doing this is to understand how a CodeKey function really works. I don't see anything in the code above which tells it to decode on the basis of the CodeKey value. The only thing I can point to in the encoded text is the last character, which is set to 1, 2, 3 or 4 depending on the CodeKey selected during the encoding process. One can verify this using the link I have given above.

    To debug, I'm using the Firebug add-on with the script running as localhost on my WAMP server. I'm able to put a breakpoint on the JS using Firebug, but I'm unable to retrieve any of the user-defined parameters or functions I mentioned above. I want to know, in this context, what would be the best way to debug this encoded JS.
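
    For what it's worth, the decoding logic is small enough to reproduce outside the browser, which can make it easier to study than stepping through Firebug. Here is a rough C# equivalent of dF() – the unescape()/document.write() plumbing is omitted, and the sample payload is made up:

        using System;
        using System.Text;

        static class Decoder
        {
            // Mirrors dF(): the last character of the payload is the code key,
            // and every other character was shifted up by that key during encoding.
            static string Decode(string s)
            {
                int key = s[s.Length - 1] - '0';          // s.substr(s.length-1,1) in the JS
                string s1 = s.Substring(0, s.Length - 1); // unescape() omitted here
                var t = new StringBuilder();
                foreach (char c in s1)
                    t.Append((char)(c - key));            // s1.charCodeAt(i) - key
                return t.ToString();
            }

            static void Main()
            {
                // "Ifmmp" shifted down by key 1 gives "Hello".
                Console.WriteLine(Decode("Ifmmp1"));
            }
        }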

    Read the article

  • How to download source code with apt-get?

    - by Muurverf
    I'm trying to download the source code of certain packages, for example rhythmbox, for learning purposes. I want to do this through apt-get, with the apt-get source command. For some reason, apt-get can't seem to find any package. I've tried several packages, and I keep getting this output:

        $ apt-get source rhythmbox
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to find a source package for rhythmbox

    I've been searching for answers for quite some time now, but I can't find anyone with the same issue. apt-get works fine for installing and updating, so apt-get itself seems fine. During the search I also learned that I had to enable the 'source code repositories' in Software Sources, so that's also enabled. I am aware of the fact that (maybe only for certain packages, I'm not sure) source packages can also be downloaded with bzr from Launchpad, but I want to know why this method won't work. Thanks in advance!

    Read the article

  • GPU On-the-fly encoding of video through Logitech HD Webcam C510

    - by Ashfame
    Originally asked here, but I edited that question to move this one out as a separate question, on the suggestion of a fellow member. I read that I should have a Core2Duo 2.2GHz for 720p, but I have a 2.0GHz one. Would it be possible for me to record first and encode after recording, if my processor really starts struggling with on-the-fly encoding? I also have an ATI HD 4850 512MB card. Can it help with on-the-fly encoding, or is there a chance that my graphics card alone can handle it, and those specs were just for a system without a graphics card? I believe so. Also, I have no worries about dealing with the console, if I have to do some of the things above in a terminal. Other possibly significant details: I have a dual-screen setup – 29" (1360x768) and 22" (1680x1050) – which might be using some good power from the GPU, and I have 2GB of DDR2 800MHz RAM.

    Read the article

  • Data Source Use of Oracle Edition Based Redefinition (EBR)

    - by Steve Felts
    Edition-based redefinition (EBR) is a new feature in the 11gR2 release of the Oracle database. It enables you to upgrade the database component of an application while it is in use, thereby minimizing or eliminating down time. It works by allowing a pre-upgrade and a post-upgrade view of the data to exist at the same time, providing a hot upgrade capability. You can then specify which view you want for a particular session. See the Oracle Database Advanced Application Developer's Guide for further information. There is also a good white paper at Edition Based Redefinition.

    Using this feature of the Oracle database does not require any new WebLogic Server functionality. It is set for each connection in the pool automatically by simply specifying SQL ALTER SESSION SET EDITION = edition_name in the Init SQL parameter in the data source configuration. This can be configured either via the console or via WLST (setInitSQL on the JDBCConnectionPoolParams). This SQL statement is executed for each newly created physical database connection. Note that we are assuming that a data source references only one edition of the database. To make use of this feature, you would have an earlier version of the application with a data source that references the earlier EDITION, and a later version of the application with a data source that references the later EDITION.

    Once you start talking about multiple versions of a WLS application, you should be using the WLS "side-by-side" or "versioned" deployment feature. See Developing Applications for Production Redeployment for more information. By combining Oracle database EBR and WLS versioned deployment, the application can be failed over with no downtime, making the combination of features more powerful than either independently. There is a catch – you need to be running with a versioned database and a versioned application initially, so that you can then switch versions. The recommended way to version a WLS application is to simply add the "Weblogic-Application-Version" property in the MANIFEST.MF file (you can also specify it at deployment time). The recommended way to configure the data source is to use a packaged data source descriptor that's stored in the ear or war, so that everything is self-contained.

    There are some restrictions. You can't use a packaged data source with Logging Last Resource (LLR) – you need to use a system resource. You can't use an application-scoped packaged data source with EmulateTwoPhaseCommit for the global-transactions-protocol with a versioned application – use a global scope. See Configuring JDBC Application Modules for Deployment for more details. There's one known problem – it doesn't work correctly with an XA data source (patch available with bug 14075837).

    Read the article

  • What encoding does InstallShield expect non-latin-alphabet string table entries to use?

    - by DNS
    I work on an app that gets distributed via a single installer containing multiple localizations. The build process includes a script that updates the .ism string table with translations for each supported language. This works fine for languages like French and German. But when testing the installer in, say, Japanese, the text shows up as a series of squares. It's unlikely to be a font problem, since the InstallShield-supplied strings show up fine; only the string table entries are mangled. So the problem seems to be that the strings are in the wrong encoding.

    The .ism is in XML format, with UTF-8 declared as its encoding, so I assumed the strings needed to be UTF-8 encoded as well. Do they actually need to use the encoding of the target platform? Is there any concern, then, about targets having different encodings, e.g. Chinese systems using one GB encoding versus another? What is the right thing to do here?
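
    One cheap way to test the encoding theory is to round-trip a sample string-table entry between candidate encodings and see where the text degrades. A small C# sketch – the choice of code page 932 (Shift-JIS) as the target system's encoding is an assumption for illustration:

        using System;
        using System.Text;

        static class MojibakeDemo
        {
            static void Main()
            {
                string entry = "日本語"; // a sample Japanese string-table entry
                byte[] utf8 = Encoding.UTF8.GetBytes(entry);

                // Interpreted as UTF-8, the bytes survive intact:
                Console.WriteLine(Encoding.UTF8.GetString(utf8));

                // Interpreted as code page 932 (Shift-JIS) - what a Japanese
                // target system might assume - the same bytes become garbage:
                Console.WriteLine(Encoding.GetEncoding(932).GetString(utf8));
            }
        }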

    Read the article

  • How to test an application for correct encoding (e.g. UTF-8)

    - by Olaf
    Encoding issues are the one topic that has bitten me most often during development. Every platform insists on its own encoding, and most likely some non-UTF-8 defaults are in the game. (I usually work on Linux, defaulting to UTF-8; my colleagues mostly work on German Windows, defaulting to ISO-8859-1 or some similar Windows codepage.) I believe that UTF-8 is a suitable standard for developing an i18n-able application. However, in my experience encoding bugs are usually discovered late (even though I'm located in Germany and we have some special characters that, along with ISO-8859-1, provide some detectable differences).

    I believe that those developers with a completely non-ASCII character set (or those that know a language that uses such a character set) get a head start in providing test data. But there must be a way to ease this for the rest of us as well. What [technique|tool|incentive] are people here using? How do you get your co-developers to care about these issues? How do you test for compliance? Are those tests conducted manually or automatically?

    Adding one possible answer upfront: I've recently discovered fliptitle.com (they provide an easy way to get weird characters written "uʍop ǝpisdn" *) and I'm planning on using them to provide easily verifiable UTF-8 character strings (as most of the characters used there are at some weird binary encoding position), but there surely must be more systematic tests, patterns or techniques for ensuring UTF-8 compatibility/usage.

    Note: Even though there's an accepted answer, I'd like to know of more techniques and patterns if there are some. Please add more answers if you have more ideas. And it has not been easy choosing only one answer for acceptance. I've chosen the regexp answer for the least expected angle to tackle the problem, although there would be reasons to choose other answers as well. Too bad only one answer can be accepted. Thank you for your input.

    *) that's "upside down" written "upside down" for those that cannot see those characters due to font problems
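
    One concrete, automatable compliance check: decode with a strict (non-replacing) UTF-8 decoder, so that mis-encoded bytes fail loudly instead of being silently replaced. A minimal C# sketch:

        using System;
        using System.Text;

        static class Utf8Check
        {
            // A strict UTF-8 decoder: throwOnInvalidBytes = true makes GetString
            // throw instead of silently substituting replacement characters.
            static readonly Encoding StrictUtf8 = new UTF8Encoding(
                encoderShouldEmitUTF8Identifier: false,
                throwOnInvalidBytes: true);

            static bool IsValidUtf8(byte[] data)
            {
                try
                {
                    StrictUtf8.GetString(data);
                    return true;
                }
                catch (DecoderFallbackException)
                {
                    return false;
                }
            }

            static void Main()
            {
                byte[] ok = Encoding.UTF8.GetBytes("u\u028Dop \u01DDpisdn");
                byte[] bad = { 0xC3, 0x28 }; // an invalid UTF-8 byte sequence
                Console.WriteLine(IsValidUtf8(ok));  // True
                Console.WriteLine(IsValidUtf8(bad)); // False
            }
        }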

    Read the article

  • C# Open Source/Free Social Networking SDK

    - by Nix
    I am currently gathering some technology requirements for a site that will be social-network based. I don't want to re-invent the wheel, so I am looking for some type of SDK or collection of tools that can provide me with a way of creating/managing a social network. I understand that no framework will probably fit my exact needs, so I am also looking for a flexible/extendable framework. An example extension point would be allowing the user to provide sub-networks, maybe a global network that could be sub-classified as work and friends. Beyond that, it would also be nice to somehow be able to import contacts from other networking sites (Facebook, LinkedIn, etc). My current technology suite will more than likely consist of the following:

    - IIS 7.0
    - WCF Data Services
    - SQL Server 2006
    - ASP.NET front end

    So my two questions are:

    1) C# Open Source Social Network SDKs?
    2) C# Open Source Social Network APIs (Facebook, LinkedIn, etc)?

    If there is any more information you may need, please let me know.

    Read the article
