Search Results

Search found 42 results on 2 pages for 'ramon'.

Page 2/2

  • Creating a Reverse Proxy using Jpcap

    - by Ramon Marco Navarro
    I need to create a program that receives HTTP requests and forwards them to the web servers. I have successfully made this using only Java sockets, but the client needed the program to be implemented with Jpcap. I'd like to know whether this is possible and what literature I should be reading for this project. This is what I have so far, stitched together from pieces of the Jpcap tutorial:

        import java.net.InetAddress;
        import java.io.*;
        import jpcap.*;
        import jpcap.packet.*;

        public class Router {
            public static void main(String args[]) {
                // Obtain the list of network interfaces
                NetworkInterface[] devices = JpcapCaptor.getDeviceList();

                // For each network interface, print out its name and description,
                // its datalink name and description, its MAC address, and its
                // IP address, subnet mask and broadcast address
                for (int i = 0; i < devices.length; i++) {
                    System.out.println(i + ": " + devices[i].name + " (" + devices[i].description + ")");
                    System.out.println("  datalink: " + devices[i].datalink_name + " (" + devices[i].datalink_description + ")");
                    System.out.print("  MAC address: ");
                    for (byte b : devices[i].mac_address)
                        System.out.print(Integer.toHexString(b & 0xff) + ":");
                    System.out.println();
                    for (NetworkInterfaceAddress a : devices[i].addresses)
                        System.out.println("  address: " + a.address + " " + a.subnet + " " + a.broadcast);
                }

                int index = 1; // index of the interface to open

                // Open the interface with openDevice(NetworkInterface intrface, int snaplen, boolean promisc, int to_ms)
                JpcapCaptor captor = null;
                try {
                    captor = JpcapCaptor.openDevice(devices[index], 65535, false, 20);
                    captor.setFilter("port 80 and host 192.168.56.1", true);
                } catch (java.io.IOException e) {
                    System.err.println(e);
                }

                // Let Jpcap call PacketPrinter.receivePacket() for every captured packet
                captor.loopPacket(-1, new PacketPrinter());
                captor.close();
            }
        }

        class PacketPrinter implements PacketReceiver {
            // This method is called every time Jpcap captures a packet
            public void receivePacket(Packet p) {
                JpcapSender sender = null;
                try {
                    NetworkInterface[] devices = JpcapCaptor.getDeviceList();
                    sender = JpcapSender.openDevice(devices[1]);
                } catch (IOException e) {
                    System.err.println(e);
                }

                IPPacket packet = (IPPacket) p;
                try {
                    // IP address of the machine sending HTTP requests (the client).
                    // It's still on the same LAN as the servers for testing purposes.
                    packet.dst_ip = InetAddress.getByName("192.168.56.2");
                } catch (java.net.UnknownHostException e) {
                    System.err.println(e);
                }

                // Create an Ethernet packet (frame) and set its frame type to IP
                EthernetPacket ether = new EthernetPacket();
                ether.frametype = EthernetPacket.ETHERTYPE_IP;

                // Set source and destination MAC addresses:
                // source is the machine running the reverse proxy,
                // destination is the machine running the web server
                ether.src_mac = new MacAddress("08:00:27:00:9C:80").getAddress();
                ether.dst_mac = new MacAddress("08:00:27:C7:D2:4C").getAddress();

                // Set the datalink frame of the packet and send it
                packet.datalink = ether;
                sender.sendPacket(packet);
                sender.close();

                // Just print out the captured packet
                System.out.println(packet);
            }
        }

    Any help would be greatly appreciated. Thank you.

    Read the article

  • Set username credential for a new channel without creating a new factory

    - by Ramon
    I have a back-end service and front-end services that communicate via the trusted subsystem pattern. I want to transfer a username from the front end to the back end, and to do this via username credentials as described here: http://msdn.microsoft.com/en-us/library/ms730288.aspx This does not work in our scenario, where the front end builds a back-end service channel factory via:

        channelFactory = new ChannelFactory<IBackEndService>(.....);

    New channels are created through the channel factory, but I can only set the credentials once; after that I get an exception saying the username object is read-only:

        channelFactory.Credentials.UserName.UserName = "myCoolFrontendUser";
        var channel = channelFactory.CreateChannel();

    Is there a way to create the channel factory only once, since it is expensive to create, and then specify the username credential each time a channel is created?

    Read the article

  • Test (with RSpec) a controller outside of a Rails environment

    - by ramon.tayag
    I'm creating a gem that will generate a controller for the Rails app that uses it. Testing the controller has been a trial-and-error process for me. Testing models has been pretty easy, but when testing controllers, ActionController::TestUnit is not included (as described here). I've tried requiring it, along with every similar-sounding module in Rails, but it hasn't worked. What would I need to require in the spec_helper to get the test to work? Thanks!

    Read the article

  • Getting RPXNow and Facebook Open Graph to Play Nicely

    - by ramon.tayag
    A requirement for using RPXNow is to set your Facebook application's Connect URL to http://mydomain.rpxnow.com. I was just trying to implement Facebook's Open Graph, and I see that it tells you to set the Base Domain to the domain that will contain the app_id. However, Facebook does not allow these two domains to differ. When I try to set the Base Domain to mydomain.com, I get this error: "Validation failed. Base Domain is not valid. Connect URL must be derived from your Base Domain." Should I create two apps, one for use with RPXNow and another for use with Open Graph? If not, what should I do? Thanks

    Read the article

  • What's a good tutorial for creating a gem with RSpec?

    - by ramon.tayag
    I've been searching around for ways to create a gem with RSpec, but haven't found descriptive tutorials. I started out with Ryan Bates' "Making a gem", but I'm looking for a tutorial that covers creating an acts_as-style gem with RSpec. By acts_as, I mean that the gem adds certain methods to an existing class in Rails. Why is this important? Because with gem templates like New Gem I've gotten a spec to run, but when I try to test an ActiveRecord object it starts choking. I've tried requiring active_record in spec_helper.rb, but I must be doing something wrong because it doesn't solve the problem. For plugins, I found this Rails Guide; if there's a gem version of that around, that'd be awesome. Thanks guys! P.S. I love screencasts.

    Read the article

  • send arrow keys using ganymed ssh java

    - by José Ramón Pérez Rubio
    I am using Ganymed SSH to connect to a remote machine, and apart from sending commands I need to send the arrow keys (left and right). I can send commands, but when I send the arrow keys nothing happens. This is what I have:

        public boolean createShell() throws Exception {
            try {
                // ...
                m_session = connection.openSession();
                m_commandWriter = new OutputStreamWriter(m_session.getStdin());
                String encoding = m_commandWriter.getEncoding(); // encoding is UTF-8
                m_errorPipe = new SSHSyncPipe(m_session.getStderr());
                m_outputPipe = new SSHSyncPipe(m_session.getStdout());
                m_outputPipe.start();
                m_errorPipe.start();
                // m_session.requestPTY("bash");
                m_session.requestDumbPTY();
                m_session.startShell();
                m_shellCreated = true;
                return true;
            }
        }

    So this works:

        m_commandWriter.write("ls\r\n");
        m_commandWriter.flush();

    but this doesn't:

        m_commandWriter.write(37); // 37 is the key code for the left arrow
        m_commandWriter.flush();

    Does anyone know what I am doing wrong? Thank you
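
    A likely explanation, for what it's worth: 37 is a keyboard virtual-key code, but a shell attached to a PTY never sees key codes; it expects ANSI escape sequences (ESC [ D for left, ESC [ C for right). A minimal sketch of that approach, reusing the m_commandWriter set up above (the helper class and method names are illustrative, not part of Ganymed):

        import java.io.IOException;
        import java.io.Writer;

        // Hypothetical helper: sends arrow keys as ANSI escape sequences
        // over the shell's stdin instead of virtual-key codes.
        class ArrowKeys {
            private static final String ESC = "\u001b"; // ASCII 27, the escape character

            static void left(Writer commandWriter) throws IOException {
                commandWriter.write(ESC + "[D"); // cursor-left sequence
                commandWriter.flush();
            }

            static void right(Writer commandWriter) throws IOException {
                commandWriter.write(ESC + "[C"); // cursor-right sequence
                commandWriter.flush();
            }
        }

    ArrowKeys.left(m_commandWriter) would then replace the write(37) call. Note also that a dumb PTY advertises a terminal with no cursor-key support, so requesting a real terminal type via the commented-out requestPTY call (with a terminal name such as "xterm" rather than "bash") may also be needed before the remote side honors the sequences.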

    Read the article

  • Mysterious constraints problem with SQL Server 2000

    - by Ramon
    Hi all,

    I'm getting the following error from a VB.NET web application written in VS 2003, on Framework 1.1. The web app is running on Windows 2000 Server with IIS 5, and is reading from a SQL Server 2000 database running on the same machine.

        System.Data.ConstraintException: Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints.
           at System.Data.DataSet.FailedEnableConstraints()
           at System.Data.DataSet.EnableConstraints()
           at System.Data.DataSet.set_EnforceConstraints(Boolean value)
           at System.Data.DataTable.EndLoadData()
           at System.Data.Common.DbDataAdapter.FillFromReader(Object data, String srcTable, IDataReader dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue)
           at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable, IDataReader dataReader, Int32 startRecord, Int32 maxRecords)
           at System.Data.Common.DbDataAdapter.FillFromCommand(Object data, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet)

    The problem appears when the web app is under high load. The system runs fine when volume is low, but when the number of requests becomes high, the system starts rejecting incoming requests with the above exception. Once the problem appears, very few requests actually make it through and get processed normally, about 2 in every 30. The vast majority of requests fail until a SQL Server restart or IIS reset is performed. The system then starts processing requests normally, and after some time it starts throwing the same error.

    The error occurs when a data adapter runs its Fill() method against a SELECT statement to populate a strongly-typed dataset. It appears that the dataset does not like the data it is given and throws this exception. The error occurs on various SELECT statements, acting on different tables. I have regenerated the dataset and checked the relevant constraints, as well as the tables from which the data is read. Both the dataset definition and the data in the tables are fine.

    Admittedly, the hardware running both the web app and SQL Server 2000 is seriously outdated, considering the number of incoming requests it currently receives. The amount of RAM consumed by SQL Server is dynamically allocated, and at peak times SQL Server can consume up to 2.8 GB out of a total of 3.5 GB on the server. At first I suspected some sort of index or database corruption, but after running DBCC CHECKDB, no errors were found in the database.

    So now I'm wondering whether this error is a result of the hardware limitations of the system. Is it possible for SQL Server to somehow mess up the data it's supposed to pass to the dataset, resulting in a constraint violation due to, say, a data type or length mismatch? I tried accessing the RowError messages of the data rows in the retrieved dataset tables, but I kept getting empty strings, even though HasErrors = true for the datatables in question. I have not set EnforceConstraints = false, and I don't want to do that.

    Thanks in advance.
    Ray

    Read the article

  • Finding a simple object in a low-quality image

    - by Ramon Snir
    Hi, I want to do this in C# (or any other .NET language), but I'm not sure how: I have an image captured from a webcam and I want to find a specific simple object in it (let's say a red circle with a black square in it). The red circle can look a bit different from time to time (because of shadows), and the square might sometimes be a bit brighter or even slightly rotated. Please help me!
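
    As a language-agnostic starting point (sketched in Java below since that is what the other code in this listing uses; the same pixel loop ports directly to .NET's Bitmap.GetPixel), the usual first pass is a color threshold: mark the pixels where red clearly dominates the other channels, then take the centroid of the marked region as the circle's center. The class name and thresholds here are illustrative assumptions to tune against real captures:

        import java.awt.image.BufferedImage;

        // Sketch: locate the red blob by thresholding on red *dominance*,
        // which tolerates shadows better than absolute brightness would.
        class RedFinder {
            // Returns {x, y} centroid of red-dominant pixels, or null if none found.
            static int[] findRedCentroid(BufferedImage img) {
                long sumX = 0, sumY = 0, count = 0;
                for (int y = 0; y < img.getHeight(); y++) {
                    for (int x = 0; x < img.getWidth(); x++) {
                        int rgb = img.getRGB(x, y);
                        int r = (rgb >> 16) & 0xff;
                        int g = (rgb >> 8) & 0xff;
                        int b = rgb & 0xff;
                        if (r > 80 && r > 2 * g && r > 2 * b) { // "red enough"
                            sumX += x;
                            sumY += y;
                            count++;
                        }
                    }
                }
                if (count == 0) return null;
                return new int[] { (int) (sumX / count), (int) (sumY / count) };
            }
        }

    A second pass can then look for a dark square near that centroid. For robustness against rotation and lighting, a computer-vision library such as OpenCV (available to .NET through wrappers like Emgu CV) with Hough-circle detection or template matching is the sturdier route.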

    Read the article

  • WP7 PanoramaItem with Icon/Button next to Header

    - by Ramon Bosch
    Windows Phone 7's People hub has an "all" panorama item with "search" and "new" buttons right next to the header/title. I can't seem to accomplish this with a PanoramaItem in Visual Studio using the standard Panorama control, and I don't know enough Silverlight/WPF to position something manually and control the transitions/movement correctly. How can I set a button (or any object, for that matter) to sit alongside the header of a WP7 PanoramaItem? Thanks!

    Read the article

  • Are there any changes in the licensing of Visual Studio 2013 Express editions?

    - by Ramón García-Pérez
    As I was going through the license.htm file provided as part of the VS2013_RTM_WebExp_ENU.iso offline installation media for Visual Studio 2013 Express for Web, I noticed that section 6 reads as follows:

        6. PACKAGE MANAGER AND THIRD PARTY SOFTWARE INSTALLATION FEATURES. The software includes the following features (each a “Feature”), each of which enables you to obtain software applications or packages through the Internet from other sources: Extension Manager, New Project Dialog, Web Platform Installer, and Microsoft NuGet-Based Package Manager. Those software applications and packages are offered and distributed in some cases by third parties and in some cases by Microsoft, but each such application or package is under its own license terms. Microsoft is not developing, distributing or licensing any of the third-party applications or packages to you, but instead, as a convenience, enables you to use the Features to access or obtain those applications or packages directly from the third-party application or package providers. By using the Features, you acknowledge and agree that: you are obtaining the applications or packages from such third parties and under separate license terms applicable to each application or package (including, with respect to the package-manager Features, any terms applicable to software dependencies that may be included in the package); MICROSOFT MAKES NO REPRESENTATIONS, WARRANTIES OR GUARANTEES AS TO THE FEED OR GALLERY URL, ANY FEEDS OR GALLERIES FROM SUCH URL, THE INFORMATION CONTAINED THEREIN, OR ANY SOFTWARE APPLICATIONS OR PACKAGES REFERENCED IN OR ACCESSED BY YOU THROUGH SUCH FEEDS OR GALLERIES. MICROSOFT GRANTS YOU NO LICENSE RIGHTS FOR THIRD-PARTY SOFTWARE APPLICATIONS OR PACKAGES THAT ARE OBTAINED USING THE FEATURES.

    Are there any changes in the licensing of the Visual Studio 2013 Express editions? If so, does this mean that installing Visual Studio extensions in the Express editions is now allowed?

    PS: Previous versions of the Express editions did not allow the installation of extensions, per the EULA/TOS discussed here: Limitations of Visual Studio 2012 Express Desktop

    Read the article

  • How do I reload servlet classes without restarting my web application?

    - by Ramon
    I want to reload just my web-layer classes without reloading my service-layer classes (which take longer to initialize and change less frequently). There are no references from my service layer into the web layer, and I can create a whole new instance of the web layer without problems. I can conceive of a solution involving complicated class-loader tricks to isolate the web layer in its own class loader, and I think this is probably the only way to do it. So what I'm asking is: is there a library out there which does this? I know about JavaRebel, but I don't need that much power; I'm really looking for a more lightweight, free solution.
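
    For reference, a minimal sketch of the class-loader isolation described above, with the web layer assumed to live in its own jar and to be reached only through interfaces loaded by the parent loader (the class name and jar path are hypothetical):

        import java.net.URL;
        import java.net.URLClassLoader;

        // Sketch: rebuild the web layer on a throwaway class loader so its
        // classes can be reloaded without touching the service layer.
        class WebLayerReloader {
            private final URL webLayerJar; // e.g. file:/app/web-layer.jar (hypothetical)
            private URLClassLoader loader;

            WebLayerReloader(URL webLayerJar) {
                this.webLayerJar = webLayerJar;
            }

            Object reload() throws Exception {
                if (loader != null) {
                    loader.close(); // release the old classes (Java 7+)
                }
                // Parent is the application loader, so service-layer classes are shared
                loader = new URLClassLoader(new URL[] { webLayerJar },
                                            getClass().getClassLoader());
                // "com.example.web.WebLayer" is a hypothetical entry point
                Class<?> entryPoint = loader.loadClass("com.example.web.WebLayer");
                return entryPoint.getDeclaredConstructor().newInstance();
            }
        }

    The catch, and the reason dedicated libraries exist for this, is that the web layer must only be referenced through interfaces loaded by the parent loader; any web-layer type that leaks into the service layer will pin the old class loader in memory.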

    Read the article

  • Change find() type of contained model or array transformation

    - by Ramon Marco Navarro
    I have the following model associations:

        Response->Survey
        Response->Question
        Response->Choice
        Survey->Question
        Question->Choice

    I want to create a form where I can answer all the questions of one survey, so I used the following to return the needed data:

        $questions = $this->Response->Question->find('all', array(
            'conditions' => array('survey_id' => $id),
            'contain' => array('Choice')
        ));

    (Sample output for debug($questions): Questions.) Is there a contain() option so that an associated model is returned in the find('list') format, so that I could use:

        foreach ($questions as $question) {
            $this->Form->select('field_name', $question['Choice']);
        }

    If no option is available, how could I do this using PHP's built-in array methods?

    Read the article

  • Good Scoop: The PeopleSoft/IBM Backstory

    - by Brian Dayton
    Sometimes you're searching for something online and you find an unrelated, bonus nugget. Last week I stumbled across an interesting blog post from Chris Heller of Grey Sparling, a PeopleSoft consulting shop in San Ramon, CA. I don't know these guys. But Chris, who apparently used to work on the PeopleTools team, wrote a great article on a pre-acquisition, would-be deal between IBM and PeopleSoft that would have standardized PeopleSoft on IBM technology. The behind-the-scenes perspective is interesting, and so is his commentary on the challenges that the company and PeopleSoft customers would have encountered if the deal had gone through:

        • "No common ownership. It's hard enough to get large groups of people to work together when they work for the same company, but with two separate companies it is much, much harder. Even within Oracle, progress on Fusion applications was slow until Thomas Kurian took over Fusion applications in addition to Fusion middleware."

        • "No customer buy-in. PeopleSoft customers weren't asking for a conversion to WebSphere, so the fact that doing that could have helped PeopleSoft stay independent wouldn't have meant much to them, especially since the cost of moving to whatever a "PeopleSoft built on WebSphere" would have been significant."

        • "No executive buy-in. This is related to the previous point, but it's worth calling out separately. If Oracle had walked away and the deal with IBM had gone through, and PeopleSoft customers got put through the wringer as part of WebSphere move, all of the PeopleSoft project teams would be put in the awkward position of explaining to their management why these additional costs and headaches were happening. Essentially they would need to "sell" the partnership internally to their own management team. That's not a fun conversation to have."

    I'm not surprised that something like this was in the works. But I did find the inside scoop and Heller's perspective on the challenges particularly interesting, especially the advantages of aligning development of applications and infrastructure under one roof. Here's a link to the whole blog entry.

    Read the article

  • Hundreds of CFOs Gathered at the “Innovation and Excellence in the Finance Function” Event

    - by Noelia Gomez
    On October 24th the event “Innovation and Excellence in the Finance Function” took place at the Fundación Rafael del Pino in Madrid (which we announced here). APD, in collaboration with Oracle, organized the day with the aim of analyzing how the CFO's role is being transformed within companies (you can see a study on the subject here).

    Enrique Sánchez de León, Director of APD, opened the day with a warm welcome to the guests. After him, Fernando Rumbero, Iberia Applications Cluster Leader at Oracle, began by sketching the changes CFOs must be ready for in order to become part of company strategy. Once all the speakers had been introduced and had taken their places on the stage of that great hall, Oriol Farré, Presales Director at Oracle, took the floor to go deeper into the CFO's new strategic role and how CFOs are increasingly becoming the catalysts of change within their companies (are you one? here we discuss how you can assess it). For her part, María Jesús Carrato, Professor of International Financial Management at IE and CFO of Grupo SM, shared her vision of what finance departments will look like in the future. Then came the turn of Ramón Arguelaguet, Financial Controller & Reporting Senior Manager at Vodafone, who delved into the innovation and transformation being led by CFOs inside their organizations. Last but not least, Juan Jesús Donoso, Economic Director of Cruz Roja Española, gave us the management perspective of a non-profit organization. Finally, at the round table, each of the participants gave their point of view on the CFO's new role and the new challenges CFOs face.

    The day closed with a cocktail reception that opened up a space for networking, which the hundreds of CFOs in attendance no doubt used to exchange views, meet new colleagues and reconnect with many others.

    If you were at the event, what did you think? Perhaps you didn't find the moment to raise a question; you can do so now in the comments and we will pass it on to the speakers.

    Read the article

  • WCF - Increase ReaderQuoatas on REST service

    - by Christo Fur
    I have a WCF REST service which accepts a JSON string. One of the parameters is a large string of numbers, which causes the following error (visible by tracing and using SVC Trace Viewer):

        There was an error deserializing the object of type CarConfiguration. The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader.

    I've read all sorts of articles advising how to rectify this, all of which recommend increasing various config settings on the server and client, e.g.:

        http://stackoverflow.com/questions/65452/error-serializing-string-in-webservice-call
        http://bloggingabout.net/blogs/ramon/archive/2008/08/20/wcf-and-large-messages.aspx
        http://social.msdn.microsoft.com/Forums/en/wcf/thread/f570823a-8581-45ba-8b0b-ab0c7d7fcae1

    So my config file looks like this:

        <webHttpBinding>
          <binding name="webBinding"
                   maxBufferSize="5242880"
                   maxReceivedMessageSize="5242880">
            <readerQuotas maxDepth="5242880"
                          maxStringContentLength="5242880"
                          maxArrayLength="5242880"
                          maxBytesPerRead="5242880"
                          maxNameTableCharCount="5242880" />
          </binding>
        </webHttpBinding>
        ...
        <endpoint address="/" binding="webHttpBinding" bindingConfiguration="webBinding"

    My problem is that I can change this on the server, but there are no WCF config settings on the client, as it's a REST service and I'm just making an HTTP request using the WebClient object. Any ideas?

    Read the article
