Search Results

Search found 12766 results on 511 pages for 'diagnostic tools'.

Page 200 of 511

  • Automatically hyperlink URLs and emails using C#, whilst leaving bespoke tags in place

    - by marcusstarnes
    I have a site that enables users to post messages to a forum. At present, if a user types a web address or email address and posts it, it's treated the same as any other piece of text. There are tools that enable the user to supply hyperlinked web and email addresses (via some bespoke tags/markup) - these are sometimes used, but not always. In addition, a bespoke 'Image' tag can also be used to reference images that are hosted on the web.

    My objective is to cater both for those that use these existing tools to generate hyperlinked addresses, and for those that simply type a web or email address in - automatically converting the latter into a hyperlinked address as soon as they submit their post.

    I've found one or two regular expressions that convert a plain string web or email address. However, I obviously don't want to perform any manipulation on addresses that are already handled via the site's bespoke tagging, and that's where I'm stuck - how to EXCLUDE any web or email addresses that are already catered for via the bespoke tagging. I want to leave those as they are. Here are some examples of bespoke tagging for the variations that need to be left alone:

        [URL=www.msn.com]www.msn.com[/URL]
        [URL=http://www.msn.com]http://www.msn.com[/URL]
        [[email protected]][email protected][/EMAIL]
        [IMG]www.msn.com/images/test.jpg[/IMG]
        [IMG]http://www.msn.com/images/test.jpg[/IMG]

    The following examples would, however, ideally need to be automatically converted into web and email links respectively:

        www.msn.com
        http://www.msn.com
        [email protected]

    Ideally, the 'converted' links would just have the appropriate bespoke tags applied to them, as per the initial examples earlier in this post - so rather than becoming <a href="..."> they'd become [URL=http://www...] and so on. Unfortunately, we have a LOT of historic data stored with this bespoke tagging throughout, so for now we'd like to retain it rather than implement an entirely new way of storing our users' posts. Any help would be much appreciated. Thanks.
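
    A minimal sketch of the exclusion logic, shown here in Java regex (the same approach ports directly to C#'s Regex class): match the bespoke tag blocks first, copy them through untouched, and run the auto-linking patterns only on the plain-text segments in between. The tag and address patterns below are simplified assumptions, not production-grade validators.

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class AutoLinker {
            // Any existing bespoke tag block - copied through untouched.
            private static final Pattern BESPOKE = Pattern.compile(
                    "\\[(URL|EMAIL)=[^\\]]*\\].*?\\[/\\1\\]|\\[IMG\\].*?\\[/IMG\\]",
                    Pattern.CASE_INSENSITIVE);
            // Deliberately simple matchers for bare addresses - illustrative only.
            private static final Pattern BARE_EMAIL =
                    Pattern.compile("[\\w.+-]+@[\\w-]+(?:\\.[\\w-]+)+");
            private static final Pattern BARE_URL =
                    Pattern.compile("(?:https?://|www\\.)[^\\s\\[\\]]+");

            public static String autoLink(String input) {
                StringBuilder out = new StringBuilder();
                Matcher m = BESPOKE.matcher(input);
                int last = 0;
                while (m.find()) {
                    out.append(linkPlainText(input.substring(last, m.start())));
                    out.append(m.group()); // already tagged: leave exactly as-is
                    last = m.end();
                }
                out.append(linkPlainText(input.substring(last)));
                return out.toString();
            }

            // Only ever called on text outside the bespoke tag blocks.
            private static String linkPlainText(String text) {
                text = BARE_EMAIL.matcher(text).replaceAll("[EMAIL=$0]$0[/EMAIL]");
                return BARE_URL.matcher(text).replaceAll("[URL=$0]$0[/URL]");
            }

            public static void main(String[] args) {
                System.out.println(autoLink(
                        "See www.msn.com or [URL=http://www.msn.com]http://www.msn.com[/URL]"));
            }
        }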

    Read the article

  • Interpreting java.lang.NoSuchMethodError message

    - by Doog
    I get the following runtime error message (along with the first line of the stack trace, which points to line 94). I'm trying to figure out why it says no such method exists.

        java.lang.NoSuchMethodError: com.sun.tools.doclets.formats.html.SubWriterHolderWriter.printDocLinkForMenu(ILcom/sun/javadoc/ClassDoc;Lcom/sun/javadoc/MemberDoc;Ljava/lang/String;Z)Ljava/lang/String;
            at com.sun.tools.doclets.formats.html.AbstractExecutableMemberWriter.writeSummaryLink(AbstractExecutableMemberWriter.java:94)

    QUESTIONS

    1. What does "ILcom" or "Z" mean?
    2. Why are there four types in parentheses (ILcom/sun/javadoc/ClassDoc;Lcom/sun/javadoc/MemberDoc;Ljava/lang/String;Z) and one after the parentheses (Ljava/lang/String;) when the method printDocLinkForMenu clearly has five parameters?

    CODE DETAIL

    The writeSummaryLink method is:

        protected void writeSummaryLink(int context, ClassDoc cd, ProgramElementDoc member) {
            ExecutableMemberDoc emd = (ExecutableMemberDoc) member;
            String name = emd.name();
            writer.strong();
            writer.printDocLinkForMenu(context, cd, (MemberDoc) emd, name, false); // line 94
            writer.strongEnd();
            writer.displayLength = name.length();
            writeParameters(emd, false);
        }

    Here's the method line 94 is calling:

        public void printDocLinkForMenu(int context, ClassDoc classDoc, MemberDoc doc,
                String label, boolean strong) {
            String docLink = getDocLink(context, classDoc, doc, label, strong);
            print(deleteParameterAnchors(docLink));
        }
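
    An editor's note on the descriptor grammar, grounded in the JVM specification, since it bears directly on both questions above:

        // A JVM method descriptor encodes each parameter type back-to-back between
        // the parentheses and puts the return type after the closing parenthesis:
        //   I                           -> int
        //   Lcom/sun/javadoc/ClassDoc;  -> ClassDoc   (L...; is a reference type)
        //   Lcom/sun/javadoc/MemberDoc; -> MemberDoc
        //   Ljava/lang/String;          -> String
        //   Z                           -> boolean
        //   ...then Ljava/lang/String;  -> the expected RETURN type
        //
        // So "ILcom..." is two tokens run together, 'I' then 'Lcom/sun/javadoc/ClassDoc;',
        // and all five parameters are present. The JVM was resolving:
        //   String printDocLinkForMenu(int, ClassDoc, MemberDoc, String, boolean)
        // while the printDocLinkForMenu shown in the question returns void - a
        // return-type mismatch alone is enough to raise NoSuchMethodError at run time.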

    Read the article

  • ISBNs are used as the primary key; now I want to add non-book items to the DB - should I migrate to EAN?

    - by fish2000
    I built an inventory database where ISBN numbers are the primary keys for the items. This worked great for a while, as the items were all books. Now I want to add non-books. Some of the non-books have EANs or ISSNs, some do not.

    It's in PostgreSQL with Django apps for the frontend and JSON API, plus a few supporting Python command-line tools for management. The items in question are mostly books and artist prints, some of which are self-published.

    What is nice about using ISBNs as primary keys is that on top of relational integrity, you get lots of handy utilities for validating ISBNs, automatically looking up missing or additional information on the book items, etcetera - many of which I've taken advantage of. Some such tools are off-the-shelf (PyISBN, PyAWS, etc.) and some are hand-rolled. I tried to keep all of these parts nice and decoupled, but you know how things can get.

    I couldn't find anything online about 'private ISBNs' or 'self-assigned ISBNs', but that's the sort of thing I was interested in doing. I doubt that's what I'll settle on, since there is already an apparent run on ISBN numbers.

    Should I retool everything for EAN numbers, or migrate off ISBNs as primary keys in general? If anyone has any experience working with these systems, I'd love to hear about it; your advice is most welcome.
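
    If the EAN route is taken, the existing book keys migrate mechanically, since every ISBN-10 has a canonical EAN-13 (ISBN-13) form: prefix "978", keep the first nine digits, recompute the check digit. A sketch of that conversion (in Java here, although the poster's stack is Python; input is assumed to be a well-formed, unhyphenated ISBN-10):

        public class IsbnToEan {
            // Convert an ISBN-10 to its EAN-13 (ISBN-13) equivalent.
            static String isbn10ToEan13(String isbn10) {
                String core = "978" + isbn10.substring(0, 9); // drop the old ISBN-10 check digit
                int sum = 0;
                for (int i = 0; i < 12; i++) {
                    int digit = core.charAt(i) - '0';
                    sum += (i % 2 == 0) ? digit : 3 * digit; // EAN-13 weights alternate 1,3
                }
                int check = (10 - sum % 10) % 10;
                return core + check;
            }

            public static void main(String[] args) {
                System.out.println(isbn10ToEan13("0306406152")); // prints 9780306406157
            }
        }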

    Read the article

  • Why are illegal cookies sent by browsers and accepted by web servers (RFC 2109)?

    - by Artyom
    According to RFC 2109, a cookie's value can be either an HTTP token or a quoted string, and a token can't include non-ASCII characters. Cookie RFC 2109: http://tools.ietf.org/html/rfc2109#page-3 - HTTP RFC 2068 token definition: http://tools.ietf.org/html/rfc2068#page-16

    However, I have found that the Firefox browser (3.0.6) sends cookies containing a UTF-8 string as-is, and the three web servers I tested (Apache 2, lighttpd, nginx) pass this string as-is to the application. For example, the raw request from the browser:

        $ nc -l -p 8080
        GET /hello HTTP/1.1
        Host: localhost:8080
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.9) Gecko/2009050519 Firefox/2.0.0.13 (Debian-3.0.6-1)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: windows-1255,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
        Cookie: wikipp=1234; wikipp_username=??????
        Cache-Control: max-age=0

    And the raw HTTP_COOKIE CGI variable, as passed through by Apache, nginx and lighttpd:

        wikipp=1234; wikipp_username=??????

    What am I missing? Can somebody explain?
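
    One common application-level workaround (an editor's sketch, not something from the original question): never put raw non-ASCII bytes into a cookie at all - percent-encode the value before setting it and decode it on read-back, so only ASCII characters cross the wire. In Java, for example:

        import java.net.URLDecoder;
        import java.net.URLEncoder;

        public class CookieCodec {
            public static void main(String[] args) throws Exception {
                String raw = "user-\u05e9\u05dc\u05d5\u05dd";       // value with non-ASCII characters
                String wire = URLEncoder.encode(raw, "UTF-8");      // pure ASCII on the wire
                String back = URLDecoder.decode(wire, "UTF-8");     // restored on read-back
                System.out.println(wire + " -> " + back);
            }
        }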

    Read the article

  • MSBuild / PowerShell: Copy SQL Server 2012 database to SQL Azure via BACPAC (for Continuous Integration)

    - by giveme5minutes
    I'm creating a continuous integration MSBuild script which copies a database on an on-premise SQL Server 2012 instance to SQL Azure. Easy, right?

    Methods

    After a fair bit of research I've come across the following methods:

    1. Use PowerShell to access the DAC library directly, then use the MSBuild PowerShell extension to wrap the script. This would require installing PowerShell 3 and working out how to make the MSBuild PowerShell extension work with it, as apparently MS moved the DAC API to a different namespace in the latest version of the library. PowerShell would give direct access to the API, but may require quite a bit of boilerplate.

    2. Use the sample DAC Framework Client Side Tools, which requires compiling them myself, as the downloads available from Codeplex only include the Hosted version. It would also require fixing them to use DAC 3.0 classes, as they appear to currently use an earlier version of DAC. I could then call these tools from an <Exec Command="" /> in the MSBuild script. Less boilerplate, and if I hit any bumps in the road I can just make changes to the source.

    Processes

    Using either method, the process could be:

    1. Export from on-premise SQL Server 2012 to a local BACPAC
    2. Upload the BACPAC to blob storage
    3. Import the BACPAC into SQL Azure via the Hosted DAC

    Or:

    1. Export from on-premise SQL Server 2012 to a local BACPAC
    2. Import the BACPAC into SQL Azure via the Client DAC

    Question

    All of the above seems to be quite a lot of effort for something that seems to be a standard feature... so before I start reinventing the wheel and documenting the results for all to see, is there something really obvious that I've missed here? Is there a pre-written script that MS has released that I have not yet uncovered? There's a command in the GUI of SQL Server Management Studio 2012 that does EXACTLY what I'm trying to do (right-click on the local database, click "Tasks", click "Deploy Database to SQL Azure"). Surely if it's a few clicks in the GUI it must be a single command on the command line somewhere?

    Read the article

  • Java 6 web services: sharing domain-specific classes between server and client

    - by user173446
    Context: consider the Engine class defined below being a parameter of some web service method. As we have both server and client in Java, we may see some benefits in sharing the Engine class between server and client (i.e. we may put it in a common jar file added to both the client and server classpath). Some benefits would be:

    - we keep specific operations like 'brushEngine' in one place;
    - the build is faster, as in our case we do not need to generate Java code for the client classes, but can use them from the server build;
    - if we later change the server implementation of 'brushEngine', this is reflected automatically in the client.

    Questions:

    1. How can the Engine class detailed below be shared using the Java 6 tools (i.e. wsimport, wsgen, etc.)?
    2. Are there other tools for Java that can achieve this sharing?
    3. Is sharing a case that Java 6 web services support is missing?
    4. Can this case be reduced to other web service usage patterns?

    Thanks.

    Code:

        public class Engine {
            private String engineData;

            public String getData() {
                return engineData;
            }

            public void setData(String value) {
                this.engineData = value;
            }

            public void brushEngine() {
                engineData = "BrushedEngine" + engineData;
            }
        }
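
    One direction worth sketching (an assumption about the setup, not a confirmed recipe for wsimport itself): JAX-WS can bind a client port directly against a service endpoint interface, so if a jar containing that annotated interface and Engine is shared by server and client, no generated client-side copies of Engine are needed. The endpoint URL, QName and EngineService interface below are all hypothetical.

        import java.net.URL;
        import javax.jws.WebService;
        import javax.xml.namespace.QName;
        import javax.xml.ws.Service;

        public class SharedClassClient {
            // Hypothetical service endpoint interface, living in the jar that both
            // server and client depend on (alongside Engine itself).
            @WebService
            public interface EngineService {
                Engine fetchEngine(String id);
            }

            public static void main(String[] args) throws Exception {
                URL wsdl = new URL("http://server:8080/engines?wsdl");                 // hypothetical endpoint
                QName name = new QName("http://example.com/engines", "EngineService"); // hypothetical QName
                Service service = Service.create(wsdl, name);
                EngineService port = service.getPort(EngineService.class); // binds the shared interface, no generated stub
                Engine engine = port.fetchEngine("42");
                engine.brushEngine(); // domain behaviour stays available client-side
            }
        }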

    Read the article

  • Linux C debugging library to detect memory corruptions

    - by calandoa
    When working some time ago on an embedded system with a simple MMU, I used to program this MMU dynamically to detect memory corruption. For instance, at some moment at runtime, the foo variable was overwritten with some unexpected data (probably by a dangling pointer or whatever). So I added some additional debugging code:

    - at init, the memory used by foo was marked as a forbidden region to the MMU;
    - each time foo was accessed on purpose, access to the region was allowed just before and forbidden again just after;
    - an MMU IRQ handler was added to dump the master and the address responsible for the violation.

    This was actually a kind of watchpoint, but handled directly by the code itself.

    Now I would like to reuse the same trick, but on an x86 platform. The problem is that I am very far from understanding how the MMU works on this platform and how it is used by Linux, but I wonder if any library/tool/system call already exists to deal with this problem. Note that I am aware that various tools such as Valgrind and GDB exist to manage memory problems, but as far as I know, none of these tools can be dynamically reconfigured by the debugged code. I am mainly interested in user space under Linux, but any info on kernel mode or on Windows is also welcome!

    Read the article

  • How do you make life easier for yourself when developing a really large database?

    - by Hannes de Jager
    I am busy developing 2 web-based systems with MySQL databases, and the number of tables/views/stored routines is really becoming a lot, so it is more and more challenging to handle the complexity. Now, in programming languages we have namespacing, e.g. Java packages and C++ namespaces, to partition the software and group things together to make them more understandable. Databases, on the other hand, have more of a flat structure (MySQL at least): tables and stored procedures all live at the same level. So one has to be more creative - establishing naming conventions, perhaps using more than one database, or using tools to visualize things.

    What methods do you use to ease the pain? To be effective while developing your databases? To not get lost in a sea of tables, fields and stored procs? Feel free to mention the tools you use as well, but try to restrict them to open source, and preferably Linux, solutions if that's OK.

    By the way, how many tables would a database need for it to be considered large in terms of design?

    Read the article

  • How do I view the full content of a text or varchar(MAX) column in SQL Server 2008 Management Studio?

    - by adamjford
    In this live SQL Server 2008 (build 10.0.1600) database, there's an Events table, which contains a text column named Details. (Yes, I realize this should actually be a varchar(MAX) column, but whoever set this database up did not do it that way.) This column contains very large logs of exceptions and associated JSON data that I'm trying to access through SQL Server Management Studio, but whenever I copy the results from the grid to a text editor, it truncates them at 43,679 characters.

    I've read in various locations on the Internet that you can set Maximum Characters Retrieved for XML Data in Tools > Options > Query Results > SQL Server > Results To Grid to Unlimited, and then perform a query such as this:

        select Convert(xml, Details) from Events where EventID = 13920

    (Note that the data in the column is not XML at all. CONVERTing the column to XML is merely a workaround I found from Googling that someone else has used to get around the limit SSMS has on retrieving data from a text or varchar(MAX) column.)

    However, after setting the option above, running the query, and clicking on the link in the result, I still get the following error:

        Unable to show XML. The following error happened:
        Unexpected end of file has occurred. Line 5, position 220160.
        One solution is to increase the number of characters retrieved from the server for XML data.
        To change this setting, on the Tools menu, click Options.

    So, any idea how to access this data? Would converting the column to varchar(MAX) fix my woes?
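
    An editor's workaround sketch that sidesteps the SSMS grid limit altogether: read the column over JDBC and stream it out, which works identically for text and varchar(MAX) columns. The connection details below are placeholders.

        import java.io.Reader;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class DumpDetails {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:sqlserver://localhost;databaseName=Live"; // placeholder connection string
                try (Connection c = DriverManager.getConnection(url, "user", "pass");
                     PreparedStatement ps = c.prepareStatement(
                             "select Details from Events where EventID = ?")) {
                    ps.setInt(1, 13920);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (rs.next()) {
                            // getCharacterStream streams the full value - no 43,679-char cap
                            Reader r = rs.getCharacterStream("Details");
                            StringBuilder sb = new StringBuilder();
                            char[] buf = new char[8192];
                            for (int n; (n = r.read(buf)) != -1; ) {
                                sb.append(buf, 0, n);
                            }
                            System.out.println(sb);
                        }
                    }
                }
            }
        }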

    Read the article

  • Why is debugging better in an IDE?

    - by Bill Karwin
    I've been a software developer for over twenty years, programming in C, Perl, SQL, Java, PHP, JavaScript, and recently Python. I've never had a problem I could not debug using some careful thought and well-placed debugging print statements.

    I respect that many people say that my techniques are primitive, and that using a real debugger in an IDE is much better. Yet from my observation, IDE users don't appear to debug faster or more successfully than I can, using my stone knives and bear skins. I'm sincerely open to learning the right tools; I've just never been shown a compelling advantage to using visual debuggers. Moreover, I have never read a tutorial or book that showed how to debug effectively using an IDE, beyond the basics of how to set breakpoints and display the contents of variables.

    What am I missing? What makes IDE debugging tools so much more effective than thoughtful use of diagnostic print statements? Can you suggest resources (tutorials, books, screencasts) that show the finer techniques of IDE debugging?

    Sweet answers! Thanks much to everyone for taking the time. Very illuminating. I voted up many, and voted none down. Some notable points:

    - Debuggers can help me do ad hoc inspection or alteration of variables, code, or any other aspect of the runtime environment, whereas manual debugging requires me to stop, edit, and re-execute the application (possibly requiring recompilation).
    - Debuggers can attach to a running process or use a crash dump, whereas with manual debugging, "steps to reproduce" a defect are necessary.
    - Debuggers can display complex data structures, multi-threaded environments, or full runtime stacks easily and in a more readable manner.
    - Debuggers offer many ways to reduce the time and repetitive work needed for almost any debugging task.
    - Visual debuggers and console debuggers are both useful and have many features in common.
    - A visual debugger integrated into an IDE also gives you convenient access to smart editing and all the other features of the IDE, in a single integrated development environment (hence the name).

    Read the article

  • Tool to detect use/abuse of String.Concat (where StringBuilder should be used)

    - by Mark Rushakoff
    It's common knowledge that you shouldn't use a StringBuilder in place of a small number of concatenations:

        string s = "Hello";
        if (greetingWorld) {
            s += " World";
        }
        s += "!";

    However, in loops of a significant size, StringBuilder is the obvious choice:

        string s = "";
        foreach (var i in Enumerable.Range(1, 5000)) {
            s += i.ToString();
        }
        Console.WriteLine(s);

    Is there a tool that I can run on either raw C# source or a compiled assembly to identify where in the source code String.Concat is being called? (If you're not familiar, s += "foo" is mapped to String.Concat in the IL output.) Obviously, I can't realistically search through an entire project and evaluate every += to identify whether the lvalue is a string. Ideally, it would only point out calls inside a for/foreach loop, but I would even put up with all the false positives of noting every String.Concat. Also, I'm aware that there are some refactoring tools that will automatically refactor my code to use StringBuilder, but I am only interested in identifying the Concat usage at this point. I routinely run Gendarme and FxCop on my code, and neither of those tools identifies what I've described.

    Read the article

  • "Android Create" call fails in windows 7 - missing JDK

    - by reuscam
    I'm having a problem getting my Android dev environment set up on Windows 7. I followed the instructions here, as well as several environment sublinks. I am using Eclipse with the Android plugin. I have installed the Java JDK several times, in various locations (jdk-6u20-windows-i586.exe), but I am obviously missing something. Every time I run "android create avd --target 2 --name my_avd" I get an error:

        C:\Users\andrew>android create avd --target 2 --name my_avd
        WARNING: Java not found in your path. Checking it it's installed in C:\Program Files\Java instead.
        ERROR: No suitable Java found. In order to properly use the Android Developer Tools, you need a suitable version of Java installed on your system. We recommend that you install the JDK version of JavaSE, available here: http://java.sun.com/javase/downloads/
        You can find the complete Android SDK requirements here: http://developer.android.com/sdk/requirements.html

    This error message is the reason for me installing the JDK several times over. First I tried installing to a location on my E: drive. I then moved it to the default location (program files (x86)\java\jdk.6.something). I also tried forcing it into the program files\ path, but it still automatically installs into the (x86) path. I have added the install path to my PATH environment variable every single time, yet I still continue to get this error.

    My suspicion is that Windows 7 and the Android tools are not playing together well in terms of finding the JDK, but who knows, it may be something entirely different. If you have seen this error before, I would appreciate a hint.

    Read the article

  • PDF external streams in Mac OS X Preview

    - by olpa
    According to the specification, part of a PDF document can reside in an external file. An example for an image:

        2 0 obj
        <<
          /Type /XObject
          /Subtype /Image
          /Width 117
          /Height 117
          /BitsPerComponent 8
          /Length 0
          /ColorSpace /DeviceRGB
          /FFilter /DCTDecode
          /F (pinguine.jpg)
        >>
        stream
        endstream
        endobj

    I found that this functionality does work in Adobe Acrobat 5.0 for Windows (sample PDF with the image); I also managed to view this file in Adobe Acrobat Reader 8.1.3 for Mac OS X after I found the setting "Allow external content". Unfortunately, it seems that non-Adobe tools ignore the external stream feature. I hope I'm wrong, and therefore ask the question: how do I enable external streams in Mac OS X? (I think all the system Mac OS X tools use the same library, therefore I say "Mac OS X" instead of "Preview".) Or maybe there could be a programming hook to emulate external streams?

    My task is: store a big set of images (totalling ~300 MB) outside of a small PDF (~1 MB). At some point, I want to filter the PDF through a Quartz filter and get a PDF with the images embedded. Any suggestions are welcome.

    Read the article

  • Matlab set defaultTextInterpreter to LaTeX

    - by Maurits
    I am running Matlab R2010a on OS X 10.7.5. I have a simple Matlab plot and would like to use LaTeX commands in the axis labels and legend. However, setting

        set(0, 'defaultTextInterpreter', 'latex');

    has zero effect, and results in a TeX warning that my TeX commands cannot be parsed. If I open the plot tools for this figure, the default interpreter is set to 'TeX'. Manually setting this to 'LaTeX' obviously fixes it, but I can't do this for hundreds of plots. Now, if I retrieve the default interpreter via the Matlab prompt, i.e.

        get(0, 'DefaultTextInterpreter')

    it says 'LaTeX', but again, when I look in the properties of the figure via the plot tools menu, the interpreter remains set to 'TeX'. Complete plotting code:

        figure
        f = 'somefile.eps'
        set(0, 'defaultTextInterpreter', 'latex');
        ms = 8;
        fontSize = 18;
        loglog(p_m_sip, p_fa_sip, 'ko-.', 'LineWidth', 2, 'MarkerSize', ms);
        hold on;
        xlabel('$P_{fa}$', 'fontsize', fontSize);
        ylabel('$P_{m}$', 'fontsize', fontSize);
        legend('$\textbf{K}_{zz}$', 'Location', 'Best');
        set(gca, 'XMinorTick', 'on', 'YMinorTick', 'on', 'YGrid', 'on', 'XGrid', 'on');
        print('-depsc2', f);

    Read the article

  • Document management, SCM?

    - by tsunade
    This might not be a hard-core programming question, but it's related to some of the tools used by programmers, I suspect. We're a bunch of people, each with a bunch of documents, and a bunch of different computers on a bunch of operating systems (well, only two: Linux and Windows). The best way these documents can be stored/managed is if they are available offline (the laptop might not always be online) but also synchronized between all the machines. Having a server with extra-reliable storage act as a "base repository" seems like a good idea to me.

    Using an SCM comes to mind, and I've tried Subversion; its centralized repository seems to be a good thing, but: when checking out, the total size of the checkout is roughly double the original size, and big files or big repositories seem to slow it down. I've also tried rsync, which might work, but it's a bit rough when it comes to potential conflicts. Finally, I've tried Unison (which is a wrapper around rsync, I think), and while it works, it becomes horribly slow for the big directories we have here, since it has to scan everything.

    So the question is: is there an SCM tool out there that is actually practical to use for a big bunch of both small and big files? If that's a no - does anyone know other tools that do this job? Thanks for reading. :)

    Read the article

  • An algorithm for generating code call graphs

    - by Shrey
    I am working on a project which requires generating some metrics for code (it can be C/C++/Java/Python). One of the metrics could be that I create a call graph after parsing the entered code (the programs are expected to be small - probably under 1000 lines). As of now, I am looking for a way to create a program (it can be C/Python) which can take a file (C/C++/Python/Java) as input and then produce a textual output containing the approximate calling sequence as well as the tokens in the code file. I have looked at some other tools which do the same thing - like splint, pylint, codeviz, etc.

    So, I have two ways of solving my problem:

    1. Read and understand the algorithm these tools use (tokenization, graph generation, etc.), or
    2. Start from a basic algorithm (something like very high-level steps) and then sit down and create each piece as I want it to be.

    I know reinventing the wheel is not a good idea, but I would still like to give option (2) a shot. The only issue is that currently I am drawing a blank. My question: does anyone have any know-how about how to create code graphs? Any hints as to what I should do? Any top-level steps which I can follow? Thanks a lot.
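
    As a rough illustration of option (2), here is a deliberately naive, single-file sketch (in Java, one of the listed target languages) of the "find declarations, then attribute call-like tokens to the enclosing method" outline. Real tools parse to an AST first; this regex version will misfire on comments, strings and nested types, and is only a starting point.

        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.Set;
        import java.util.TreeSet;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class NaiveCallGraph {
            // Very rough shape of a Java method declaration.
            static final Pattern DECL = Pattern.compile(
                    "(?:public|private|protected|static|\\s)+[\\w<>\\[\\]]+\\s+(\\w+)\\s*\\([^)]*\\)\\s*\\{");
            // Anything that looks like "name(" inside a body is treated as a call.
            static final Pattern CALL = Pattern.compile("(\\w+)\\s*\\(");
            static final Set<String> KEYWORDS = new HashSet<>(
                    Arrays.asList("if", "for", "while", "switch", "catch", "return", "new"));

            public static void main(String[] args) throws Exception {
                String src = new String(Files.readAllBytes(Paths.get(args[0])));
                List<String> names = new ArrayList<>();
                List<Integer> bodyStarts = new ArrayList<>();
                Matcher decl = DECL.matcher(src);
                while (decl.find()) {
                    names.add(decl.group(1));
                    bodyStarts.add(decl.end());
                }
                // Attribute every call-like token to the method whose body it sits in.
                Map<String, Set<String>> graph = new LinkedHashMap<>();
                for (int i = 0; i < names.size(); i++) {
                    int start = bodyStarts.get(i);
                    int end = (i + 1 < names.size()) ? bodyStarts.get(i + 1) : src.length();
                    Matcher call = CALL.matcher(src.substring(start, end));
                    Set<String> callees = new TreeSet<>();
                    while (call.find()) {
                        if (!KEYWORDS.contains(call.group(1))) {
                            callees.add(call.group(1));
                        }
                    }
                    graph.put(names.get(i), callees);
                }
                for (Map.Entry<String, Set<String>> e : graph.entrySet()) {
                    System.out.println(e.getKey() + " -> " + e.getValue());
                }
            }
        }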

    Read the article

  • ASP.NET MVC - Wrong redirecting, how to debug?

    - by Xorty
    I am stuck with a redirecting problem in an ASP.NET MVC project. I have mapped tables via LINQ to SQL, and each has a unique ID as its primary key. I am implementing 'Create' functionality: after a new value is added to the SQL table (i.e. when I press the Save button), I want to be redirected to the details of the freshly added item. Here's a little code showing how I am doing it:

        [AcceptVerbs(HttpVerbs.Post), Authorize]
        public ActionResult Create(Item item)
        {
            ...
            return RedirectToAction("Details", new { id = item.ItemID });

    The trouble is, I am never redirected to the Details view (I have a Details.aspx view for items). When I check the Call Hierarchy in Visual Studio (2010 Pro), the hierarchy is indeed a little strange, like this:

        RedirectToAction(string, object)
          Calls To 'RedirectToAction'
            Create
              Calls To Create (no results)
              Calls From Create (methods of the created instance; from there I'll get back to 'RedirectToAction', 'Calls To Create' and 'Calls From Create', etc. - a loop)
          Calls From 'RedirectToAction'
            Not supported

    I am looking for tools, or more specifically the know-how (since VS probably has some tools), to debug this kind of situation.

    PS: routing is the default: "{controller}/{action}/{id}"

    Thanks

    Read the article

  • How do I produce a screenshot of a flash swf on the server?

    - by indignant
    I'm writing a Flash app using the open source tools. I would like to load a data file into the app and capture a screenshot of the stage on the server. The only part that seems mysterious is running the app on the server. In fact, I don't even care if it's the same app running on the server and in the browser - if I can use the Flash stage and drawing routines to produce an image server-side, I'm happy. If I have to delve into Flex, fine. Right now I'm having problems finding any starting point at all. I gather Adobe has some commercial products that may fit the bill, but I'd like to stick with open source, Apache, and Linux. I know this is probably possible with haXe/Neko, but I'd like to use more mainstream tools if possible. Am I asking too much?

    EDIT/CLARIFICATION: Many thanks for the responses so far, but I think I've been a bit muddy in my description. I've already written the actual stage-grabbing stuff using the same PNGEncoder class as was suggested. The problem is actually running the SWF on the server side. I don't want to let the client take the screenshot itself, because this opens up the possibility of the client maliciously submitting a screenshot which does not correspond to what is on the stage - that is, I don't want users uploading porn. If I could run the ActionScript code on the server, then I could generate the screenshot from my data files and be sure that the screenshot matches the data, but I have no idea how to run the ActionScript or SWF on the server.

    Read the article

  • Auditing front-end performance on a web application

    - by user1018494
    I am currently trying to performance-tune the UI of a company web application. The application is only ever going to be accessed by staff, so the speed of the connection between the server and client will always be considerably better than if it were on the Internet. I have been using performance auditing tools such as YSlow and Google Chrome's profiling tool to try to highlight areas worth targeting for investigation. However, these tools are written with the Internet in mind. For example, the current suggestions from a Google Chrome audit of the application are as follows:

    Network Utilization
    - Combine external CSS (red warning)
    - Combine external JavaScript (red warning)
    - Enable gzip compression (red warning)
    - Leverage browser caching (red warning)
    - Leverage proxy caching (amber warning)
    - Minimise cookie size (amber warning)
    - Parallelize downloads across hostnames (amber warning)
    - Serve static content from a cookieless domain (amber warning)

    Web Page Performance
    - Remove unused CSS rules (amber warning)
    - Use normal CSS property names instead of vendor-prefixed ones (amber warning)

    Are any of these bits of advice totally redundant given the connection speed and usage pattern? The users will be using the application frequently throughout the day, so it doesn't matter if the initial hit is large (when they first visit the page and build their cache) so long as a minimal amount of work is done on future page views. For example, is it worth the effort of combining all of our CSS and JavaScript files? It may speed up the initial page view, but how much of a difference will it really make on subsequent page views throughout the working day? I've tried searching for this, but all I keep coming up with is the standard Internet-facing performance advice. Any advice on what to focus my performance-tweaking efforts on in this scenario, or other auditing tool recommendations, would be much appreciated.

    Read the article

  • DNS protocol message example

    - by virtual-lab
    I am trying to figure out how to send DNS messages from an application socket adapter to a DNSBL. I spent the last two days understanding the basics, including experimenting with Wireshark to catch an example of the messages exchanged. Now I would like to query the DNS without using the dig or host commands (I'm using Ubuntu). How can I perform this action at a low level, without the help of these tools wrapping the request in a proper DNS message format? How should the message be sent: as hex or as a string?

    Thanks in advance for any help. Regards, Alessandro Ilardo

    Comment added: I am investigating JDeveloper and Oracle SOA. The platform provides a Socket Adapter which simply applies a transformation (XSLT) and sends the message straight to the socket. How the payload parameters (e.g. the host I'm looking up) are wrapped within the message is left to the developer. So basically I have an idea of how the whole DNS message is structured, but rather than put everything into JDeveloper straight away, I'd like to run some tests on my own just to make sure I've got a valid message format. So I am not using any specific language (I don't even understand why they moved my question from Server Fault), and I don't want to use any tools which would hide part of the message, such as the header. I know they work well, by the way. I guess this stuff has something to do with packet injection. Someone suggested I use telnet, but I've only used that for SMTP or HTTP; I haven't got a clue how it works for DNS requests. Does it make more sense now?
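
    To make "low level" concrete (an editor's sketch grounded in RFC 1035, not part of the original question): a DNS query is a binary datagram - neither hex text nor a printable string - consisting of a 12-byte header, a length-prefixed QNAME, and two 16-bit fields (QTYPE, QCLASS), sent over UDP to port 53. The resolver address and query name below are placeholders; a DNSBL lookup uses the same format with the reversed-IP host name.

        import java.io.ByteArrayOutputStream;
        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;

        public class RawDnsQuery {
            public static void main(String[] args) throws Exception {
                ByteArrayOutputStream q = new ByteArrayOutputStream();
                q.write(new byte[]{0x12, 0x34});        // ID (arbitrary)
                q.write(new byte[]{0x01, 0x00});        // flags: standard query, RD=1
                q.write(new byte[]{0x00, 0x01});        // QDCOUNT = 1
                q.write(new byte[]{0, 0, 0, 0, 0, 0});  // AN/NS/AR counts = 0
                for (String label : "example.com".split("\\.")) { // QNAME: length-prefixed labels
                    q.write(label.length());
                    q.write(label.getBytes("US-ASCII"));
                }
                q.write(0);                              // root label terminates the QNAME
                q.write(new byte[]{0x00, 0x01});        // QTYPE = A
                q.write(new byte[]{0x00, 0x01});        // QCLASS = IN
                byte[] packet = q.toByteArray();

                try (DatagramSocket sock = new DatagramSocket()) {
                    InetAddress server = InetAddress.getByName("8.8.8.8"); // placeholder resolver
                    sock.send(new DatagramPacket(packet, packet.length, server, 53));
                    byte[] buf = new byte[512];
                    DatagramPacket resp = new DatagramPacket(buf, buf.length);
                    sock.receive(resp);                  // raw answer, same binary format
                    System.out.println("got " + resp.getLength() + " bytes back");
                }
            }
        }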

    Read the article

  • Syncing project versions with Java

    - by Alexadar
    I have a problem: I need to sync versions of a project. Changes are made to the main project, which resides on a central ColdFusion server. There are about 30 remote ColdFusion servers that have to sync with the latest version on the central server. The new sync application has to be written in Java! I have a direct link to each remote location, and mostly Windows OSes and Windows tools are used.

    My idea: the synchronization is performed at the level of synchronizing folders. I would like to avoid the use of SVN and similar tools. The plan is to carry out a comparison of folders on the local server (the comparison is performed between the old and the new version), with NO communication with the remote servers; compute the difference between the versions; and send that difference to the remote machines. We will install Tomcat on every remote server, and a "sync application" deployed on Tomcat will take care of applying the differences. Each remote server will then report back on the success of the sync.

    Any kind of suggestion, or a completely new approach to this topic, is more than welcome. Thanks, best regards
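
    A sketch of the local "old vs. new" comparison step described above (an illustration only, not a complete sync tool): digest every file in both trees, then classify each relative path as added, changed or removed. The resulting delta - new/changed files plus a removal list - is what would be shipped to the sync application on each remote Tomcat.

        import java.math.BigInteger;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.security.MessageDigest;
        import java.util.List;
        import java.util.Map;
        import java.util.TreeMap;
        import java.util.stream.Collectors;

        public class FolderDiff {
            // Map of relative path -> MD5 digest for every regular file under root.
            static Map<String, String> digests(Path root) throws Exception {
                Map<String, String> out = new TreeMap<>();
                List<Path> files = Files.walk(root)
                        .filter(Files::isRegularFile)
                        .collect(Collectors.toList());
                for (Path p : files) {
                    MessageDigest md = MessageDigest.getInstance("MD5");
                    md.update(Files.readAllBytes(p));
                    out.put(root.relativize(p).toString().replace('\\', '/'),
                            new BigInteger(1, md.digest()).toString(16));
                }
                return out;
            }

            public static void main(String[] args) throws Exception {
                Map<String, String> oldVersion = digests(Paths.get(args[0]));
                Map<String, String> newVersion = digests(Paths.get(args[1]));
                for (Map.Entry<String, String> e : newVersion.entrySet()) {
                    String before = oldVersion.get(e.getKey());
                    if (before == null) {
                        System.out.println("ADDED   " + e.getKey());
                    } else if (!before.equals(e.getValue())) {
                        System.out.println("CHANGED " + e.getKey());
                    }
                }
                for (String path : oldVersion.keySet()) {
                    if (!newVersion.containsKey(path)) {
                        System.out.println("REMOVED " + path);
                    }
                }
            }
        }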

    Read the article

  • What are the benefits and risks of moving to a Model Driven Architecture approach?

    - by Tone
    I work for a company with about 350 employees, and we are in the process of growing. Our current codebase is not structured very well, and we are looking both at how to improve it immediately (by organizing objects into namespaces, separating concerns, etc.) and at moving to a model-driven architecture approach, where we model and design everything first with UML and then generate code from that model. We have been looking heavily at Sparx Systems Enterprise Architect (EA), which is UML 2.0 capable, and we are also considering the tools in VS 2010. I know there are other tools out there (Rational XDE being one), but I really do not think we can spend $1500+ per license at this point.

    I'm not looking for answers on which tool is better than another, but rather for experiences moving from a cowboy-coding environment (that is, little planning and design; just jump in and start coding) to a model-driven architecture. Looking back, was it helpful to your organization? What are the pain points? What are the risks? What are the benefits?

    Read the article

  • Best way to migrate export/import from SQL Server to Oracle

    - by matao
    I'm faced with needing access, for reporting purposes, to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. We're now looking at doing an export/import from SQL Server to Oracle, and I'd like some advice on the best way to go about it.

    The procedure will need to be fully automated and run nightly, which rules out using the SQL developer tools. I also can't make a live link between the databases from our (Oracle) side, as the firewall is in the way. The data needs to be transformed in the process from a star schema to a denormalised table ready for reporting.

    What I'm thinking of is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read out the data into a flat file, using the SQL Server equivalent of SQL*Plus as a scheduled task, and dump it into a well-known location. Then, on the Oracle side, a cron job would copy down the file, load it with SQL*Loader, rebuild indexes, and so on. This is all doable, but very manual.

    Is there one tool, or a combination of FOSS or standard Oracle/SQL Server tools, that could automate this for me? The irreducible complexity is the query on one side and building indexes on the other, but I would love not to have to write the CSV-dumping detail or the SQL*Loader script - to just say "dump this view out to CSV" on one side, and on the other "truncate and insert into this table from CSV", and not worry about mapping column names and all the other arcane sqlldr voodoo. Best practices? Thoughts? Comments?

    Edit: I have 50+ columns of varying types and lengths in my dataset, which is why I'd prefer not to have to write out how to generate and map each individual column.

    Read the article

  • Need some clarification on the ANSI/SPARC three-level database architecture

    - by Moonshield
    I'm currently revising for a databases exam and looking over some past papers, but there's one question that I'm slightly unsure about, and I was wondering if someone could offer some assistance.

    "Describe EACH of the THREE levels of the ANSI SPARC 3 level architecture. Your answer should include the purpose of EACH of the schemas, the level of abstraction they provide and the software tools that would be used to access and support them."

    As I understand it (although please correct me if I'm wrong): the internal schema specifies the physical storage of the data; the conceptual schema specifies the structure of the database and the domains; and the external schemas are how the database is viewed by "users" (applications, etc.). As for the abstraction, I understand that the conceptual layer means that the physical data storage can be altered without the end user being affected, and likewise that the external schemas mean the conceptual schema can be changed without affecting existing user views.

    The bit that I'm not sure about is what tools are used to access and support each layer. Would the internal schema be handled by the DBMS, the conceptual schema handled by some sort of DDL interpreter, and the external schema handled by a DML interpreter (or have I misunderstood what each level does)?

    Any assistance would be greatly appreciated. Thanks, Moonshield

    Read the article

  • Why does the opennlp library's HelloWorld work fine in Java but not with JRuby?

    - by 0x90
    I am getting this error:

        SyntaxError: hello.rb:13: syntax error, unexpected tIDENTIFIER
        public HelloWorld( InputStream data ) throws IOException {

    The HelloWorld.rb is:

        require "java"

        import java.io.FileInputStream;
        import java.io.InputStream;
        import java.io.IOException;
        import opennlp.tools.postag.POSModel;
        import opennlp.tools.postag.POSTaggerME;

        public class HelloWorld {
            private POSModel model;

            public HelloWorld( InputStream data ) throws IOException {
                setModel( new POSModel( data ) );
            }

            public void run( String sentence ) {
                POSTaggerME tagger = new POSTaggerME( getModel() );
                String[] words = sentence.split( "\\s+" );
                String[] tags = tagger.tag( words );
                double[] probs = tagger.probs();
                for( int i = 0; i < tags.length; i++ ) {
                    System.out.println( words[i] + " => " + tags[i] + " @ " + probs[i] );
                }
            }

            private void setModel( POSModel model ) {
                this.model = model;
            }

            private POSModel getModel() {
                return this.model;
            }

            public static void main( String args[] ) throws IOException {
                if( args.length < 2 ) {
                    System.out.println( "HelloWord <file> \"sentence to tag\"" );
                    return;
                }
                InputStream is = new FileInputStream( args[0] );
                HelloWorld hw = new HelloWorld( is );
                is.close();
                hw.run( args[1] );
            }
        }

    The error occurs when running ruby HelloWorld.rb "I am trying to make it work". When I run the HelloWorld.java with "I am trying to make it work" it works perfectly; of course, the .java version doesn't contain the require statement.

    EDIT: I followed the following steps. The output of jruby -v is:

        jruby 1.6.7.2 (ruby-1.8.7-p357) (2012-05-01 26e08ba) (Java HotSpot(TM) 64-Bit Server VM 1.6.0_35) [darwin-x86_64-java]

    Read the article
