Search Results

Search found 9471 results on 379 pages for 'technology tid bits'.

  • Is a modem required to be programmed when used with an internet provider?

    - by Tim
    Does a modem need to be programmed when it is used with an internet provider? If so, what is the purpose of programming a modem? Do both DSL and cable ISPs require a modem in an individual home? For example, I have a Motorola SURFboard modem (Model: SB5101, Customer S/N: xxx, S/N: xxx, HFC MAC ID: xxx, USB CPE MAC ID: xxx), a coil of cable, and a splitter from a Comcast High-Speed Internet Self-Installation Kit, all bought 5 years ago when I purchased Comcast internet service from its retailer www.comcastoffers.com. With them, I was hoping to cut costs by not asking Comcast to come over and install. But I remember that at the time Comcast sent its technician anyway, dismissed my idea of self-installation, said they needed to use their own modem, and charged me a hefty fee, so my equipment has never been used. I haven't used Comcast in a long time. Are my modem, cable, and splitter (brand new, never used) still good to use with an internet provider such as Comcast? If needed, we can ignore their policy and just consider the technology side. Or are they no good, and should I throw them away like trash? Thanks and regards!

    Read the article

  • Qt qUncompress gzip data

    - by talei
    Hello, I have stumbled upon a problem and can't find a solution. I want to uncompress data in Qt, using qUncompress(QByteArray), that was sent from the web in gzip format. I used Wireshark to confirm that it is a valid gzip stream, and zip/rar can both uncompress it. The code so far looks like this:

        static const char dat[40] = {
            0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03,
            0xaa, 0x2e, 0x2e, 0x49, 0x2c, 0x29, 0x2d, 0xb6, 0x4a, 0x4b,
            0xcc, 0x29, 0x4e, 0xad, 0x05, 0x00, 0x00, 0x00, 0xff, 0xff,
            0x03, 0x00, 0x2a, 0x63, 0x18, 0xc5, 0x0e, 0x00, 0x00, 0x00
        }; // this data contains the string {status:false} in gzip format
        QByteArray data;
        data.append(dat, sizeof(dat));
        unsigned int size = 14; // expected uncompressed size, reconstructed big-endian
        // prepend the expected uncompressed size (the last 4 bytes in dat, 0x0e = 14)
        QByteArray dataPlusSize;
        dataPlusSize.append((unsigned int)((size >> 24) & 0xFF));
        dataPlusSize.append((unsigned int)((size >> 16) & 0xFF));
        dataPlusSize.append((unsigned int)((size >> 8) & 0xFF));
        dataPlusSize.append((unsigned int)((size >> 0) & 0xFF));
        QByteArray uncomp = qUncompress(dataPlusSize);
        qDebug() << uncomp;

    The uncompression fails with: qUncompress: Z_DATA_ERROR: Input data is corrupted. As far as I know, gzip consists of a 10-byte header, the DEFLATE payload, and an 8-byte trailer (4-byte CRC32 + 4-byte ISIZE, the uncompressed data size). Stripping the header and trailer should leave me with the DEFLATE data stream, but qUncompress yields the same error on that too. I checked with a data string compressed in PHP, like this:

        $stringData = gzcompress("{status:false}", 1);

    and qUncompress uncompresses that data (I didn't see any gzip header there, though, i.e. ID1 = 0x1f, ID2 = 0x8b). I stepped through the code above with the debugger, and the error occurs at:

        if (
        #endif
            ((BITS(8) << 8) + (hold >> 8)) % 31) {  // here is the error, WHY? long unsigned int hold = 35615
            strm->msg = (char *)"incorrect header check";
            state->mode = BAD;
            break;
        }

    in inflate.c, line 610. I know that qUncompress is simply a wrapper around zlib, so I suppose it should handle gzip without any problem. Any comments are more than welcome. Best regards
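    For context, qUncompress() expects a raw zlib stream prefixed with a 4-byte big-endian expected size (the format qCompress() produces); it does not understand the gzip container, which is why the gzip header trips the "incorrect header check" in inflate.c. A minimal sketch of decoding a gzip buffer with zlib directly is shown below; passing 16 + MAX_WBITS to inflateInit2() asks zlib to parse the gzip header and trailer. The function name and chunk size are illustrative, not taken from the original post.

        #include <QByteArray>
        #include <cstring>
        #include <zlib.h>

        // Decompress a gzip-wrapped buffer with zlib itself.
        // Returns an empty QByteArray on failure.
        QByteArray gzipUncompress(const QByteArray &compressed)
        {
            z_stream strm;
            std::memset(&strm, 0, sizeof(strm));
            // 16 + MAX_WBITS tells zlib to expect a gzip header and trailer.
            if (inflateInit2(&strm, 16 + MAX_WBITS) != Z_OK)
                return QByteArray();

            strm.next_in  = (Bytef *)compressed.constData();
            strm.avail_in = compressed.size();

            QByteArray result;
            char out[4096];
            int ret;
            do {
                strm.next_out  = (Bytef *)out;
                strm.avail_out = sizeof(out);
                ret = inflate(&strm, Z_NO_FLUSH);
                if (ret != Z_OK && ret != Z_STREAM_END) {
                    inflateEnd(&strm);
                    return QByteArray();  // corrupt or truncated stream
                }
                result.append(out, sizeof(out) - strm.avail_out);
            } while (ret != Z_STREAM_END);

            inflateEnd(&strm);
            return result;
        }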

    Read the article

  • [Wireless LAN] hostapd gives an error when running on the target board

    - by Renjith G
    hi, I got the following error when i tried to run the hostapd command in my target board. Any idea about this? /etc # hostapd -dd hostapd.conf Configuration file: hostapd.conf madwifi_set_iface_flags: dev_up=0 madwifi_set_privacy: enabled=0 BSS count 1, BSSID mask ff:ff:ff:ff:ff:ff (0 bits) Flushing old station entries madwifi_sta_deauth: addr=ff:ff:ff:ff:ff:ff reason_code=3 ioctl[IEEE80211_IOCTL_SETMLME]: Invalid argument madwifi_sta_deauth: Failed to deauth STA (addr ff:ff:ff:ff:ff:ff reason 3) Could not connect to kernel driver. Deauthenticate all stations madwifi_sta_deauth: addr=ff:ff:ff:ff:ff:ff reason_code=2 ioctl[IEEE80211_IOCTL_SETMLME]: Invalid argument madwifi_sta_deauth: Failed to deauth STA (addr ff:ff:ff:ff:ff:ff reason 2) madwifi_set_privacy: enabled=0 madwifi_del_key: addr=00:00:00:00:00:00 key_idx=0 madwifi_del_key: addr=00:00:00:00:00:00 key_idx=1 madwifi_del_key: addr=00:00:00:00:00:00 key_idx=2 madwifi_del_key: addr=00:00:00:00:00:00 key_idx=3 Using interface ath0 with hwaddr 00:0b:6b:33:8c:30 and ssid '"RG_WLAN Testing Renjith G"' SSID - hexdump_ascii(len=27): 22 52 47 5f 57 4c 41 4e 20 54 65 73 74 69 6e 67 "RG_WLAN Testing 20 52 65 6e 6a 69 74 68 20 47 22 Renjith G" PSK (ASCII passphrase) - hexdump_ascii(len=12): 6d 79 70 61 73 73 70 68 72 61 73 65 mypassphrase PSK (from passphrase) - hexdump(len=32): 70 6f a6 92 da 9c a8 3b ff 36 85 76 f3 11 9c 5e 5d 4a 4b 79 f4 4e 18 f6 b1 b8 09 af 6c 9c 6c 21 madwifi_set_ieee8021x: enabled=1 madwifi_configure_wpa: group key cipher=1 madwifi_configure_wpa: pairwise key ciphers=0xa madwifi_configure_wpa: key management algorithms=0x2 madwifi_configure_wpa: rsn capabilities=0x0 madwifi_configure_wpa: enable WPA=0x1 WPA: group state machine entering state GTK_INIT (VLAN-ID 0) GMK - hexdump(len=32): [REMOVED] GTK - hexdump(len=32): [REMOVED] WPA: group state machine entering state SETKEYSDONE (VLAN-ID 0) madwifi_set_key: alg=TKIP addr=00:00:00:00:00:00 key_idx=1 madwifi_set_privacy: enabled=1 madwifi_set_iface_flags: dev_up=1 ath0: Setup of interface done. l2_packet_receive - recvfrom: Network is down Wireless event: cmd=0x8b1a len=40 Register Fail Register Fail WPA: group state machine entering state SETKEYS (VLAN-ID 0) GMK - hexdump(len=32): [REMOVED] GTK - hexdump(len=32): [REMOVED] wpa_group_setkeys: GKeyDoneStations=0 WPA: group state machine entering state SETKEYSDONE (VLAN-ID 0) madwifi_set_key: alg=TKIP addr=00:00:00:00:00:00 key_idx=2 Signal 2 received - terminating Flushing old station entries madwifi_sta_deauth: addr=ff:ff:ff:ff:ff:ff reason_code=3 ioctl[IEEE80211_IOCTL_SETMLME]: Invalid argument madwifi_sta_deauth: Failed to deauth STA (addr ff:ff:ff:ff:ff:ff reason 3) Could not connect to kernel driver. Deauthenticate all stations madwifi_sta_deauth: addr=ff:ff:ff:ff:ff:ff reason_code=2 ioctl[IEEE80211_IOCTL_SETMLME]: Invalid argument madwifi_sta_deauth: Failed to deauth STA (addr ff:ff:ff:ff:ff:ff reason 2) madwifi_set_privacy: enabled=0 madwifi_set_ieee8021x: enabled=0 madwifi_set_iface_flags: dev_up=0

    Read the article

  • Converting a WPFToolkit DataGrid from 1D list to 2D matrix

    - by user61073
    Hello - I am wondering if anyone has attempted the following or has an idea of how to do it. I have a WPFToolkit DataGrid bound to an ObservableCollection of items. As such, the DataGrid is shown with as many rows as there are items in the ObservableCollection, and as many columns as I have defined for the DataGrid. That all works. What I now need is another view of the same data, where instead the DataGrid shows as many cells as there are items in the ObservableCollection. So let's say my ObservableCollection has 100 items in it. The original scenario showed the DataGrid with 100 rows and 1 column. In the modified scenario, I need to show it with 10 rows and 10 columns, where each cell shows the value that was in the original representation. In other words, I need to transform my 1D ObservableCollection into a 2D ObservableCollection and display it in the DataGrid. I know how to do that programmatically in the code-behind, but can it be done in XAML? Let me simplify the problem a little, in case anybody wants to have a crack at this. The XAML below does the following:
    * Defines an XmlDataProvider just for dummy data
    * Creates a DataGrid with 10 columns
      o each column is a DataGridTemplateColumn using the same CellTemplate
    * The CellTemplate is a simple TextBlock bound to an XML element
    If you run the XAML below, you will find that the DataGrid ends up with 5 rows, one for each book, and 10 columns with identical content (all showing the book titles). However, what I am trying to accomplish, albeit with a different data set, is that in this case I would end up with one row, with each book title appearing in a single cell in row 1, occupying cells 0-4, and nothing in cells 5-9. Then, if I added more data and had 12 books in my XML data source, I would get row 1 completely filled (cells covering the first 10 titles) and row 2 would get its first 2 cells filled. Can my scenario be accomplished primarily in XAML, or should I resign myself to working in the code-behind? Any guidance would be greatly appreciated. Thanks so much!
<UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:custom="http://schemas.microsoft.com/wpf/2008/toolkit" mc:Ignorable="d" x:Name="UserControl" d:DesignWidth="600" d:DesignHeight="400" > <UserControl.Resources> <XmlDataProvider x:Key="InventoryData" XPath="Inventory/Books"> <x:XData> <Inventory xmlns=""> <Books> <Book ISBN="0-7356-0562-9" Stock="in" Number="9"> <Title>XML in Action</Title> <Summary>XML Web Technology</Summary> </Book> <Book ISBN="0-7356-1370-2" Stock="in" Number="8"> <Title>Programming Microsoft Windows With C#</Title> <Summary>C# Programming using the .NET Framework</Summary> </Book> <Book ISBN="0-7356-1288-9" Stock="out" Number="7"> <Title>Inside C#</Title> <Summary>C# Language Programming</Summary> </Book> <Book ISBN="0-7356-1377-X" Stock="in" Number="5"> <Title>Introducing Microsoft .NET</Title> <Summary>Overview of .NET Technology</Summary> </Book> <Book ISBN="0-7356-1448-2" Stock="out" Number="4"> <Title>Microsoft C# Language Specifications</Title> <Summary>The C# language definition</Summary> </Book> </Books> <CDs> <CD Stock="in" Number="3"> <Title>Classical Collection</Title> <Summary>Classical Music</Summary> </CD> <CD Stock="out" Number="9"> <Title>Jazz Collection</Title> <Summary>Jazz Music</Summary> </CD> </CDs> </Inventory> </x:XData> </XmlDataProvider> <DataTemplate x:Key="GridCellTemplate"> <TextBlock> <TextBlock.Text> <Binding XPath="Title"/> </TextBlock.Text> </TextBlock> </DataTemplate> </UserControl.Resources> <Grid x:Name="LayoutRoot"> <custom:DataGrid HorizontalAlignment="Stretch" VerticalAlignment="Stretch" IsSynchronizedWithCurrentItem="True" Background="{DynamicResource WindowBackgroundBrush}" HeadersVisibility="All" RowDetailsVisibilityMode="Collapsed" SelectionUnit="CellOrRowHeader" CanUserResizeRows="False" GridLinesVisibility="None" RowHeaderWidth="35" AutoGenerateColumns="False" CanUserReorderColumns="False" CanUserSortColumns="False"> <custom:DataGrid.Columns> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="01" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="02" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="03" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="04" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="05" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="06" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="07" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="08" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="09" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="10" /> </custom:DataGrid.Columns> <custom:DataGrid.ItemsSource> <Binding Source="{StaticResource InventoryData}" XPath="Book"/> </custom:DataGrid.ItemsSource> </custom:DataGrid> </Grid>

    Read the article

  • 'NoneType' object has no attribute 'get' error using SQLAlchemy

    - by Az
    I've been trying to map an object to a database using SQLAlchemy but have run into a snag. Version info if handy: [OS: Mac OSX 10.5.8 | Python: 2.6.4 | SQLAlchemy: 0.5.8] The class I'm going to map: class Student(object): def __init__(self, name, id): self.id = id self.name = name self.preferences = collections.defaultdict(set) self.allocated_project = None self.allocated_rank = 0 def __repr__(self): return str(self) def __str__(self): return "%s %s" %(self.id, self.name) Background: Now, I've got a function that reads in the necessary information from a text database into these objects. The function more or less works and I can easily access the information from the objects. Before the SQLAlchemy code runs, the function will read in the necessary info and store it into the Class. There is a dictionary called students which stores this as such: students = {} students[id] = Student(<all the info from the various "reader" functions>) Afterwards, there is an "allocation" algorithm that will allocate projects to student. It does that well enough. The allocated_project remains as None if a student is unsuccessful in getting a project. SQLAlchemy bit: So after all this happens, I'd like to map my object to a database table. Using the documentation, I've used the following code to only map certain bits. I also begin to create a Session. from sqlalchemy import * from sqlalchemy.orm import * engine = create_engine('sqlite:///:memory:', echo=False) metadata = MetaData() students_table = Table('studs', metadata, Column('id', Integer, primary_key=True), Column('name', String) ) metadata.create_all(engine) mapper(Student, students_table) Session = sessionmaker(bind=engine) sesh = Session() Now after that, I was curious to see if I could print out all the students from my students dictionary. for student in students.itervalues(): print student What do I get but an error: Traceback (most recent call last): File "~/FYP_Tests/FYP_Tests.py", line 140, in <module> print student File "/~FYP_Tests/Parties.py", line 30, in __str__ return "%s %s" %(self.id, self.name) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/attributes.py", line 158, in __get__ return self.impl.get(instance_state(instance), instance_dict(instance)) AttributeError: 'NoneType' object has no attribute 'get' I'm at a loss as to how to resolve this issue, if it is an issue. If more information is required, please ask and I will provide it.

    Read the article

  • Can/should one disable namespace validation in Axis2 clients using ADB databinding?

    - by RJCantrell
    I have an document/literal Axis 1 service which I'm consuming with an Axis 2 client (ADB databinding, generated from WSDL2Java). It receives a valid XML response, but in parsing that XML, I get the error "Unexpected Subelement" for any type which doesn't have a namespace defined in the response. I can resolve the error by manually changing the (auto-generated by Axis2) type-validation line to only check the type name and not the namespace as well, but is there a non-manual way to skip this? This service is consumed properly by other Axis 1 clients. Here's the response, with bits in ALL CAPS having been anonymized. Some types have namespaces, some have empty-string namespaces, and some have none at all. Can I control this via the service's WSDL, or in Axis2's WSDL2Java handling of the WSDL? Is there a fundamental mismatch between Axis 1 and Axis2? The response below looks correct, except perhaps for where that type (anonymized as TWO below) is nested within itself. <?xml version="1.0" encoding="UTF-8"?> <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <soapenv:Body> <SERVICENAMEResponse xmlns="http://service.PROJECT.COMPANY.com"> <SERVICENAMEReturn xmlns=""> <ONE></ONE> <TWO><TWO> <THREE>2</THREE> <FOUR>-10</FOUR> <FIVE>6</FIVE> <SIX>1</SIX> </TWO></TWO> <fileName></fileName> </SERVICENAMEReturn></SERVICENAMEResponse> </soapenv:Body></soapenv:Envelope> I did not have to modify the validation of the "SERVICENAMEResponse" object because it has the correct namespace specified, but I have to so for all its subelements. The manual modification that I can make (one per type) to fix this is to change: if (reader.isStartElement() && new javax.xml.namespace.QName("http://dto.PROJECT.COMPANY.com","ONE").equals(reader.getName())){ to: if (reader.isStartElement() && new javax.xml.namespace.QName("ONE").equals(reader.getName())){

    Read the article

  • CGContextDrawPDFPage doesn't seem to persist in CGContext

    - by erichf
    I am trying to access the pixels of a CGContext written to with a PDF, but the bitmap buffer doesn't seem to update. Any help would be appreciated: //Get the reference to our current page pageRef = CGPDFDocumentGetPage(docRef, iCurrentPage); //Start with a media crop, but see if we can shrink to smaller crop CGRect pdfRect1 = CGRectIntegral(CGPDFPageGetBoxRect(pageRef, kCGPDFMediaBox)); CGRect r1 = CGRectIntegral(CGPDFPageGetBoxRect(pageRef, kCGPDFCropBox)); if (!CGRectIsEmpty(r1)) pdfRect1 = r1; int wide = pdfRect1.size.width + pdfRect1.origin.x; int high = pdfRect1.size.height + pdfRect1.origin.y; CGContextRef ctxBuffer = NULL; CGColorSpaceRef colorSpace; UInt8* bitmapData; int bitmapByteCount; int bitmapBytesPerRow; bitmapBytesPerRow = (wide * 4); bitmapByteCount = (bitmapBytesPerRow * high); colorSpace = CGColorSpaceCreateDeviceRGB(); bitmapData = malloc( bitmapByteCount ); if (bitmapData == NULL) { DebugLog (@"Memory not allocated!"); return; } ctxBuffer = CGBitmapContextCreate (bitmapData, wide, high, 8, // bits per component bitmapBytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big); // if (ctxBuffer== NULL) { free (bitmapData); DebugLog (@"Context not created!"); return; } CGColorSpaceRelease( colorSpace ); //White out the current context CGContextSetRGBFillColor(ctxBuffer, 1.0, 1.0, 1.0, 1.0); CGContextFillRect(ctxBuffer, CGContextGetClipBoundingBox(ctxBuffer)); CGContextDrawPDFPage(ctxBuffer, pageRef); //!!!This displays just fine to the context passed in from - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx. That is, I can see the PDf page rendered, so we know ctxBuffer was created correctly //However, if I view bitmapData in memory, it only shows as 0xFF (or whatever fill color I use) CGImageRef img = CGBitmapContextCreateImage(ctxBuffer); CGContextDrawImage(ctx, tiledLayer.frame, img); void *data = CGBitmapContextGetData (ctx); for (int i = 0; i < wide; i++) { for (int j = 0; j < high; j++) { //All of the bytes show as 0xFF (or whatever fill color I test with)?! int byteIndex = (j * 4) + i * 4; UInt8 red = bitmapData[byteIndex]; UInt8 green = bitmapData[byteIndex + 1]; UInt8 blue = bitmapData[byteIndex + 2]; UInt8 alpha = m_PixelBuf[byteIndex + 3]; } } I have also tried using CGDataProviderCopyData(CGImageGetDataProvider(img)) & CFDataGetBytePtr, but the results are the same?
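    One detail worth checking when walking the pixels of a bitmap context: the byte index has to advance by the row stride (bitmapBytesPerRow) per row, not by 4, and the buffer being read should be the one backing the context actually inspected. Whether or not that explains the all-0xFF reads here, a hedged sketch of the loop, reusing the wide, high and bitmapBytesPerRow variables from the post:

        // Walk every pixel of the RGBA bitmap backing ctxBuffer.
        // The index must step by bitmapBytesPerRow per row, not by 4.
        UInt8 *pixels = (UInt8 *)CGBitmapContextGetData(ctxBuffer);
        for (int y = 0; y < high; y++) {
            for (int x = 0; x < wide; x++) {
                size_t byteIndex = (size_t)y * bitmapBytesPerRow + (size_t)x * 4;
                UInt8 red   = pixels[byteIndex];
                UInt8 green = pixels[byteIndex + 1];
                UInt8 blue  = pixels[byteIndex + 2];
                UInt8 alpha = pixels[byteIndex + 3];
                // ... inspect the channel values here ...
            }
        }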

    Read the article

  • Add Silverlight Bing Maps Control to Windows Mobile 7 application

    - by Jacob
    I know the bits just came out today, but one of the first things I want to do with the newly released Windows Mobile 7 SDK is put a map up on the screen and mess around. I've downloaded the latest version of the Silverlight Maps Control and added the references to my application. As a matter of fact, the VS 2010 design view of the MainPage.xaml shows the map control after adding the namespace and placing the control. I'm using the provided VS 2010 Express version that comes with the Win Mobile 7 SDK and have just used the New Project - Windows Phone Application template. When I try to build I get two warnings related to the Microsoft.Maps.MapControl dll's. Warning 1 The primary reference "Microsoft.Maps.MapControl, Version=1.0.1.0, Culture=neutral, PublicKeyToken=498d0d22d7936b73, processorArchitecture=MSIL" could not be resolved because it has an indirect dependency on the framework assembly "System.Windows.Browser, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e" which could not be resolved in the currently targeted framework. "Silverlight,Version=v4.0,Profile=WindowsPhone". To resolve this problem, either remove the reference "Microsoft.Maps.MapControl, Version=1.0.1.0, Culture=neutral, PublicKeyToken=498d0d22d7936b73, processorArchitecture=MSIL" or retarget your application to a framework version which contains "System.Windows.Browser, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e". Warning 2 The primary reference "Microsoft.Maps.MapControl.Common, Version=1.0.1.0, Culture=neutral, PublicKeyToken=498d0d22d7936b73, processorArchitecture=MSIL" could not be resolved because it has an indirect dependency on the framework assembly "System.Windows.Browser, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e" which could not be resolved in the currently targeted framework. "Silverlight,Version=v4.0,Profile=WindowsPhone". To resolve this problem, either remove the reference "Microsoft.Maps.MapControl.Common, Version=1.0.1.0, Culture=neutral, PublicKeyToken=498d0d22d7936b73, processorArchitecture=MSIL" or retarget your application to a framework version which contains "System.Windows.Browser, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e". I'm leaning towards some way of adding the System.Windows.Browser to the targeted framework version. But I'm not even sure if that is possible. To be more specific; I'm looking for a way to get the Silverlight Maps Control up on a Windows Phone 7 series application. If possible. Thanks.

    Read the article

  • How to use c++0x thread in Android NDK?

    - by m-ric
    I am trying to compile this simple program with android-ndk-r8b: jni/hello_jni.cpp #include <iostream> #include <thread> void hello() { std::cout << "Hi i'm a thread!!!" << std::endl; } int main() { std::thread th(hello); th.join(); return 0; } jni/Application.mk APP_OPTIM := release APP_MODULES := hello_thread APP_STL := gnustl_static jni/Android.mk LOCAL_PATH := $(call my-dir) include $(CLEAR_VARS) LOCAL_CPPFLAGS += -std=c++0x -frtti LOCAL_MODULE := hello_thread LOCAL_LDLIBS := -L$(SYSROOT)/usr/lib -pthread LOCAL_SRC_FILES := hello_thread.cpp include $(BUILD_EXECUTABLE) ndk-build returns me an error arguin that 'thread' is not a member of 'std'. I issued ndk-build -n to get the compilation command and issued it alone in my shell: /home/evigier/android-ndk-r8b/toolchains/arm-linux-androideabi-4.6/prebuilt/linux-x86/bin/arm-linux-androideabi-g++ -MMD -MP -MF /home/evigier/eclipse_workspace/hello_thread/obj/local/armeabi/objs/hello_thread/hello_thread.o.d -fpic -ffunction-sections -funwind-tables -fstack-protector -D__ARM_ARCH_5__ -D__ARM_ARCH_5T__ -D__ARM_ARCH_5E__ -D__ARM_ARCH_5TE__ -march=armv5te -mtune=xscale -msoft-float -fno-exceptions -fno-rtti -mthumb -Os -fomit-frame-pointer -fno-strict-aliasing -finline-limit=64 -I/home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include -I/home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/libs/armeabi/include -I/home/evigier/eclipse_workspace/hello_thread/jni -DANDROID -Wa,--noexecstack -std=c++0x -frtti -O2 -DNDEBUG -g -I/home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include -c /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp -o /home/evigier/eclipse_workspace/hello_thread/obj/local/armeabi/objs/hello_thread/hello_thread.o Compile++ thumb : hello_thread <= hello_thread.cpp In file included from /home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include/stdio.h:55:0, from /home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include/wchar.h:33, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/cwchar:46, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/bits/postypes.h:42, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/iosfwd:42, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/ios:39, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/ostream:40, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/iostream:40, from jni/hello_thread.cpp:4: /home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include/sys/types.h:124:9: error: 'uint64_t' does not name a type /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp: In function 'int main()': /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp:14:5: error: 'thread' is not a member of 'std' /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp:14:17: error: expected ';' before 'th' /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp:15:5: error: 'th' was not declared in this scope I read a lot of threads/questions about POSIX threads and C++ threads, but still cannot find my answer. My arm-linux-androideabi/include/c++/4.6/thread file defines class thread in std only: #if defined(_GLIBCXX_HAS_GTHREADS) && defined(_GLIBCXX_USE_C99_STDINT_TR1) They don't seem to be defined in my sdk (c++config.h). But how can I possibly turn them on safely? 
Do I need to compile my own toolchain to use (non-POSIX) threads? My host machine is: Linux evigier-ThinkPad-X220 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 20:45:39 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
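    As background, libstdc++ only compiles std::thread in when _GLIBCXX_HAS_GTHREADS (and _GLIBCXX_USE_C99_STDINT_TR1) are defined for the target, which the c++config.h quoted above shows is not the case for this NDK's gnustl build, so the class is simply absent rather than misconfigured. A minimal sketch of the usual workaround, calling pthreads directly (which Android's Bionic does provide); it mirrors the example above and is an illustration, not the NDK's recommended API:

        #include <cstdio>
        #include <pthread.h>

        // Plain pthread equivalent of the std::thread example.
        static void *hello(void *)
        {
            std::printf("Hi, I'm a thread!!!\n");
            return 0;
        }

        int main()
        {
            pthread_t th;
            if (pthread_create(&th, 0, hello, 0) != 0)
                return 1;            // thread creation failed
            pthread_join(th, 0);     // wait for the thread to finish
            return 0;
        }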

    Read the article

  • Union struct produces garbage and general question about struct nomenclature

    - by SoulBeaver
    I read about unions the other day (today, actually) and tried the sample functions that came with them. Easy enough, but the result was utter garbage. The first example is:

        union Test {
            int Int;
            struct {
                char byte1;
                char byte2;
                char byte3;
                char byte4;
            } Bytes;
        };

    where an int is assumed to have 32 bits. After I set a value, Test t; t.Int = 7; and then cout << the individual bytes, nothing is displayed, but my computer beeps. Which is fairly odd, I guess. The second example gave me even worse results:

        union SwitchEndian {
            unsigned short word;
            struct {
                unsigned char hi;
                unsigned char lo;
            } data;
        } Switcher;

    Looks a little wonky in my opinion. Anyway, the description says this should automatically store the result in a big-endian/little-endian split when I set the value like Switcher.word = 7656; but calling cout << Switcher.data.hi << endl printed symbols that aren't even defined in the ASCII chart. Not sure why those are showing up. Finally, I got an error when I tried correcting the example by, instead of placing Bytes at the end of the struct, positioning it right next to it. So instead of struct {} Bytes; I wanted to write struct Bytes {}; This tossed me a big ol' error. What's the difference between these? Since C++ cannot have unnamed structs, it seemed, at the time, pretty obvious that the Bytes positioned at the beginning and at the end are the things that name it. Except no, that's not the entire answer, I guess. What is it then?
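    A likely explanation for the beep and the odd symbols: operator<< on a char prints it as a character, so the byte value 7 comes out as the ASCII BEL control character (which beeps) rather than the digit 7. A small hedged sketch, showing the first example with the bytes printed as numbers instead:

        #include <iostream>

        union Test {
            int Int;
            struct {
                char byte1;
                char byte2;
                char byte3;
                char byte4;
            } Bytes;
        };

        int main()
        {
            Test t;
            t.Int = 7;
            // Cast to int so the numeric value is printed, not the control character.
            std::cout << static_cast<int>(t.Bytes.byte1) << ' '
                      << static_cast<int>(t.Bytes.byte2) << ' '
                      << static_cast<int>(t.Bytes.byte3) << ' '
                      << static_cast<int>(t.Bytes.byte4) << std::endl;
            return 0;   // prints "7 0 0 0" on a little-endian machine
        }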

    Read the article

  • What should I do or not do to avoid the Delphi "push dword" bug?

    - by Maksee
    I found that Delphi 5 generates invalid assembly code in specific cases. I can't understand in what cases in general. The example below produces access violation since a very strange optimization occurs. For a byte in a record or array Delphi generates push dword [...], pop ebx, mov .., bl that works correctly if there are data after this byte (we need at least three to push dword correctly), but fails if the data is inaccessible. I emulated the strict boundaries here with win32 Virtual* functions Specifically the error occurs when the last byte from the block accessed inside FeedBytesToClass procedure. And if I try to change something like using data array instead of object property of remove actionFlag variable, Delphi generates correct assembly instructions. const BlockSize = 4096; type TSomeClass = class private fBytes: PByteArray; public property Bytes: PByteArray read fBytes; constructor Create; destructor Destroy;override; end; constructor TSomeClass.Create; begin inherited Create; GetMem(fBytes, BlockSize); end; destructor TSomeClass.Destroy; begin FreeMem(fBytes); inherited; end; procedure FeedBytesToClass(SrcDataBytes: PByteArray; Count: integer); var j: integer; Ofs: integer; actionFlag: boolean; AClass: TSomeClass; begin AClass:=TSomeClass.Create; try actionFlag:=true; for j:=0 to Count-1 do begin Ofs:=j; if actionFlag then begin AClass.Bytes[Ofs]:=SrcDataBytes[j]; end; end; finally AClass.Free; end; end; procedure TForm31.Button1Click(Sender: TObject); var SrcDataBytes: PByteArray; begin SrcDataBytes:=VirtualAlloc(Nil, BlockSize, MEM_COMMIT, PAGE_READWRITE); try if VirtualLock(SrcDataBytes, BlockSize) then try FeedBytesToClass(SrcDataBytes, BlockSize); finally VirtualUnLock(SrcDataBytes, BlockSize); end; finally VirtualFree(SrcDataBytes, MEM_DECOMMIT, BlockSize); end; end; Initially the error occured when I used access to RGB data of bitmap bits, but the code there is too complex so I narrowed it to this fragment. So the question is what is here so specific that makes Delphi produce push,pop,mov optimization. I need to know this in order to avoid such side effects in general.

    Read the article

  • Compile AMR-NB codec with RVCT for WinCE/Windows Mobile

    - by pps
    Hello everybody, I'm working on amr speech codec (porting/optimization) I have an arm (for WinCE) optimized version from voiceage and I use it as a reference in performance testing. So far, binary produced with my lib beats the other one by around 20-30%! I use Vs2008 and I have limited access to ARM instruction set I can generate with Microsoft compiler. So I tried to look for alternative compiler to see what would be performance difference. I have RVCT compiler, but it produces elf binaries/object files. However, I run my test on a wince mobile phone (TyTn 2) so I need to find a way to run code compiled with RVCT on WinCE. Some of the options are 1) to produce assembly listing (-S option of armcc), and try to assemble with some other assembler that can create COFF (MS assembler for arm) 2) compile and convert generated ELF object file to COFF object (seems like objcopy of gnu binutils could help me with that) 3) using fromelf utility supplied by RVCT create BIN file and somehow try to mangle the bits so I can execute them ;) My first attempt is to create a simple c++ file with one exported function, compile it with RVCT and then try to run that function on the smartphone. The emitted assembly cannot be assembled by the ms assembler (not only they are not compatible, but also ms assembler rejects some of the instructions generated with RVCT compiler; ASR opcode in my case) Then I tried to convert ELF object to coff format and I can't find any information on that. There is a gcc port for ce and objcopy from that toolset is supposed to be able to do the task. However, I can't get it working. I tried different switches, but I have no idea what exactly I need to specify as bfdname for input and output format. So, I couldn't get it working either. Dumping with fromelf and using generated bin file seems to be overkill, so I decided to ask you guys if there is anything I should try to do or maybe someone has already done similar task and could help me. Basically, all I want to do is to compile my code with RVCT compiler and see what's the performance difference. My code has zero dependencies on any c runtime functions. thanks!

    Read the article

  • Install Trac on 64-bit Windows 7

    - by Tufo
    I'm configuring a new Developing Server that came with Windows 7 64bits. It must have installed Trac with Subversion integration. I install Subversion with VisualSVN 2.1.1, clients with TortoiseSVN 1.6.7 and AnkhSVN 2.1.7 for Visual Studio 2008 SP1 integration. All works fine! my problem begun when going to Trac installation. I install python 2.6 all fine. Trac hasn't a x64 windows installer, so I installed it manually by compiling it with python console (C:\Python26\python.exe C:/TRAC/setup.py install). After that, I can create TRAC projects normally, the Trac core is working fine. And so the problem begins, lets take a look at the Trac INSTALL file: Requirements To install Trac, the following software packages must be installed: Python, version = 2.3. Subversion, version = 1.0. (= 1.1.xrecommended) Subversion SWIG Python bindings (not PySVN). PySQLite,version 1.x (for SQLite 2.x) or version 2.x (for SQLite 3.x) Clearsilver, version = 0.9.3 (0.9.14 recommended) Python: OK Subverion: OK Subversion SWIG Python bindings (not PySVN): Here I face the first issue, he asks me for 'cd' to the swig directory and run the 'configure' file, and the result is: C:\swigwin-1.3.40> c:\python26\python.exe configure File "configure", line 16 DUALCASE=1; export DUALCASE # for MKS sh ^ SyntaxError: invalid syntax PySQLite, version 1.x (for SQLite 2.x) or version 2.x (for SQLite 3.x): Don't need, as Python 2.6 comes with SQLLite Clearsilver, version = 0.9.3 (0.9.14 recommended): Second issue, Clearsilver only has 32bit installer wich does not recognize python installation (as registry keys are in different places from 32 to 64 bits). So I try to manually install it with python console. It returns me a error of the same kind as SWIG: C:\clearsilver-0.10.5>C:\python26\python.exe ./configure File "./configure", line 13 if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then ^ SyntaxError: invalid syntax When I simulate a web server using the "TRACD" command, it runs fine when I disable svn support but when I try to open the web page it shows me a error regarding ClearSilver is not installed for generating the html content. AND (for making me more happy) This TRAC will run over IIS7, I mustn't install Apache... I'm nearly crazy with this issue... HELP!!!

    Read the article

  • What Causes Boost Asio to Crash Like This?

    - by Scott Lawson
    My program appears to run just fine most of the time, but occasionally I get a segmentation fault. boost version = 1.41.0 running on RHEL 4 compiled with GCC 3.4.6 Backtrace: #0 0x08138546 in boost::asio::detail::posix_fd_set_adapter::is_set (this=0xb74ed020, descriptor=-1) at /home/scottl/boost_1_41_0/boost/asio/detail/posix_fd_set_adapter.hpp:57 __result = -1 'ÿ' #1 0x0813e1b0 in boost::asio::detail::reactor_op_queue::perform_operations_for_descriptors (this=0x97f3b6c, descriptors=@0xb74ed020, result=@0xb74ecec8) at /home/scottl/boost_1_41_0/boost/asio/detail/reactor_op_queue.hpp:204 op_iter = {_M_node = 0xb4169aa0} i = {_M_node = 0x97f3b74} #2 0x081382ca in boost::asio::detail::select_reactor::run (this=0x97f3b08, block=true) at /home/scottl/boost_1_41_0/boost/asio/detail/select_reactor.hpp:388 read_fds = {fd_set_ = {fds_bits = {16, 0 }}, max_descriptor_ = 65} write_fds = {fd_set_ = {fds_bits = {0 }}, max_descriptor_ = -1} retval = 1 lock = { = {}, mutex_ = @0x97f3b1c, locked_ = true} except_fds = {fd_set_ = {fds_bits = {0 }}, max_descriptor_ = -1} max_fd = 65 tv_buf = {tv_sec = 0, tv_usec = 710000} tv = (timeval *) 0xb74ecf88 ec = {m_val = 0, m_cat = 0x81f2c24} sb = { = {}, blocked_ = true, old_mask_ = {__val = {0, 0, 134590223, 3075395548, 3075395548, 3075395464, 134729792, 3075395360, 135890240, 3075395368, 134593920, 3075395544, 135890240, 3075395384, 134599542, 3020998404, 135890240, 3075395400, 134614095, 3075395544, 4, 3075395416, 134548135, 3021172996, 4294967295, 3075395432, 134692921, 3075395504, 0, 3075395448, 134548107, 3021172992}}} #3 0x0812eb45 in boost::asio::detail::task_io_service ::do_one (this=0x97f3a70, lock=@0xb74ed230, this_idle_thread=0xb74ed240, ec=@0xb74ed2c0) at /home/scottl/boost_1_41_0/boost/asio/detail/task_io_service.hpp:260 more_handlers = false c = {lock_ = @0xb74ed230, task_io_service_ = @0x97f3a70} h = (boost::asio::detail::handler_queue::handler *) 0x97f3aa0 polling = false task_has_run = true #4 0x0812765f in boost::asio::detail::task_io_service ::run (this=0x97f3a70, ec=@0xb74ed2c0) at /home/scottl/boost_1_41_0/boost/asio/detail/task_io_service.hpp:103 ctx = { = {}, owner_ = 0x97f3a70, next_ = 0x0} this_idle_thread = {wakeup_event = { = {}, cond_ = {__c_lock = { __status = 0, __spinlock = 22446}, __c_waiting = 0x2bd7, __padding = "\000\000\000\000×+\000\000\000\000\000\000×+\000\000\000\000\000\000\204:\177\t\000\000\000", __align = 0}, signalled_ = true}, next = 0x0} lock = { = {}, mutex_ = @0x97f3a84, locked_ = false} n = 11420 #5 0x08125e99 in boost::asio::io_service::run (this=0x97ebbcc) at /home/scottl/boost_1_41_0/boost/asio/impl/io_service.ipp:58 ec = {m_val = 0, m_cat = 0x81f2c24} s = 8 #6 0x08154424 in boost::_mfi::mf0::operator() (this=0x9800870, p=0x97ebbcc) at /home/scottl/boost_1_41_0/boost/bind/mem_fn_template.hpp:49 No locals. #7 0x08154331 in boost::_bi::list1 ::operator(), boost::_bi::list0 (this=0x9800878, f=@0x9800870, a=@0xb74ed337) at /home/scottl/boost_1_41_0/boost/bind/bind.hpp:236 No locals. #8 0x081541e5 in boost::_bi::bind_t, boost::_bi::list1 ::operator() (this=0x9800870) at /home/scottl/boost_1_41_0/boost/bind/bind_template.hpp:20 a = {} #9 0x08154075 in boost::detail::thread_data, boost::_bi::list1 ::run (this=0x98007a0) at /home/scottl/boost_1_41_0/boost/thread/detail/thread.hpp:56 No locals. 
#10 0x0816fefd in thread_proxy () at /usr/lib/gcc/i386-redhat-linux/3.4.6/../../../../include/c++/3.4.6/bits/locale_facets.tcc:2443 __ioinit = {static _S_refcount = , static _S_synced_with_stdio = } ---Type to continue, or q to quit--- typeinfo for common::RuntimeException = {} typeinfo name for common::RuntimeException = "N6common16RuntimeExceptionE" #11 0x00af23cc in start_thread () from /lib/tls/libpthread.so.0 No symbol table info available. #12 0x00a5c96e in __init_misc () from /lib/tls/libc.so.6 No symbol table info available.
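    For what it's worth, a descriptor of -1 inside posix_fd_set_adapter::is_set often points at a socket that was closed or destroyed from another thread while the select reactor was still watching it; boost::asio socket objects are not safe for concurrent use from multiple threads. A hedged sketch of the usual mitigation, funnelling the close through the io_service thread instead of calling it directly; the names are placeholders, not taken from the backtrace:

        #include <boost/asio.hpp>
        #include <boost/bind.hpp>

        // Runs on the io_service thread, so the reactor is never mid-select
        // on a descriptor that vanishes underneath it.
        void do_close(boost::asio::ip::tcp::socket &sock)
        {
            boost::system::error_code ec;
            sock.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec);
            sock.close(ec);   // ignore errors; the peer may already be gone
        }

        // Call this from any other thread instead of closing the socket directly.
        void safe_close(boost::asio::io_service &io,
                        boost::asio::ip::tcp::socket &sock)
        {
            io.post(boost::bind(&do_close, boost::ref(sock)));
        }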

    Read the article

  • Grails Liferay portlet not invoking action

    - by RJ Regenold
    I am trying to create a simple portlet for Liferay 5.2.2 using Grails 1.2.1 with the grails-portlets 0.7 and grails-portlets-liferay 0.2 plugins. I created and deployed a stock portlet (just updated title, description, etc...). It deploys correctly and the view renders correctly. However, when I submit the default form that is in view.gsp it never hits the actionView function. Here are the relevant code bits: SearchPortlet.groovy class SearchPortlet { def title = 'Search' def description = ''' A simple search portlet. ''' def displayName = 'Search' def supports = ['text/html':['view', 'edit', 'help']] // Liferay server specific configurations def liferay_display_category = 'Category' def actionView = { println "In action view" } def renderView = { println "In render view" //TODO Define render phase. Return the map of the variables bound to the view ['mykey':'myvalue'] } ... } view.gsp <%@ taglib uri="http://java.sun.com/portlet" prefix="portlet" %> <div> <h1>View Page</h1> The map returned by renderView is passed in. Value of mykey: ${mykey} <form action="${portletResponse.createActionURL()}"> <input type="submit" value="Submit"/> </form> </div> The tomcat terminal prints In render view whenever I view the portlet, and after I press the submit button. It never prints the In action view statement. Any ideas? Update I turned on logging and this is what I see whenever I click the submit button in the portlet: [localhost].[/gportlet] - servletPath=/Search, pathInfo=/invoke, queryString=null, name=null [localhost].[/gportlet] - Path Based Include portlets.GrailsDispatcherPortlet - DispatcherPortlet with name 'Search' received render request portlets.GrailsDispatcherPortlet - Bound render request context to thread: com.liferay.portlet.RenderRequestImpl@7a158e portlets.GrailsDispatcherPortlet - Testing handler map [org.codehaus.grails.portlets.GrailsPortletHandlerMapping@1f06283] in DispatcherPortlet with name 'Search' portlets.GrailsDispatcherPortlet - Testing handler adapter [org.codehaus.grails.portlets.GrailsPortletHandlerAdapter@74f72b] portlets.GrailsPortletHandlerAdapter - portlet.handleMinimised not set, proceeding with normal render portlet.SearchPortlet - In render view portlets.GrailsPortletHandlerAdapter - Couldn't resolve action view /search/null.gsp portlets.GrailsPortletHandlerAdapter - Trying to render mode view /search/view.gsp portlets.GrailsDispatcherPortlet - Setting portlet response content type to view-determined type [text/html;charset=ISO-8859-1] [localhost].[/gportlet] - servletPath=/WEB-INF/servlet/view, pathInfo=null, queryString=null, name=null [localhost].[/gportlet] - Path Based Include portlets.GrailsDispatcherPortlet - Cleared thread-bound render request context: com.liferay.portlet.RenderRequestImpl@7a158e portlets.GrailsDispatcherPortlet - Successfully completed request The fourth line in that log snippet says Bound render request..., which I don't understand because the action in the form that is in the portlet is to the action url. I would've thought that should be an action request.

    Read the article

  • Is this way of storing typed objects in memory good?

    - by Pindatjuh
    This is an "is this okay, or can it be done better" question. Topic: Storing typed objects in memory. Background information: I'm building a compiler for the x86-32 platform for my language. My goal includes typed objects. Idea: Every primitive is a semi-class (it can be used as if it was a normal class, but it's stored more compact). Every class is represented by primitives and some meta-data (containing class-properties, inheritance stuff, etc.). The meta-data is complex: it doesn't use fields but instead context-switches. For primitives, the meta-data is very small, compared to a "real" class, which is alot bigger. This enables another idea that "primitives are objects", in my language, which I found nessecairy. Example: If I have an array of 32 booleans, then the pure content of this array is exactly 4 byte (32 bits of booleans). The meta-data will contain flags that the type is an array of booleans, which contains 32 entries. The meta-data is very compacted, on bit-level: using a sort of "packing" mechanism, which is read by a FSM at runtime, when doing inspection of the type (like when passing the object to methods for checking, etc.) For instance (read from left to right, top to bottom, remember vertical possition when going to the right, and check nearest column header for meaning of switch): Primitive? Array? Type-Meta 1 Byte? || Size (1 byte) 1 1 [...] 1 [...] done 0 2 Bytes? || Size (2 bytes) 1 [...] done || Size (4 bytes) 0 [...] done Integer? 1 Byte? 2 Bytes? 0 1 0 1 done 1 done 0 done Boolean? Byte? 0 1 0 done 1 done More-Primitives 0 .... Class-Stuff (Huge) 0 ... (After reaching done the data is inserted. || = byte alignement. [...] is variable sized. ... is not described here, for simplicity. And let's call them cost-based-data-structures.) For an array of 32 booleans containing all true values, the memory for this type would be (read top-down): 1 Primitive 1 Array 1 ArrayType: Primitive 0 Not-Array 0 Not-Integer 1 Boolean 0 Not-Byte (thus bit) 1 Integer Size: 1 Byte 00100000 Array size 11111111 11111111 11111111 11111111 Data Thus, 8 bytes represent 32 booleans in an array: 11100101 00100000 11111111 11111111 11111111 11111111 Is this okay, or can it be done better?
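    To make the "packing mechanism read by an FSM at runtime" concrete, here is a small, hedged sketch of the kind of bit reader such metadata decoding needs: it pulls an arbitrary number of bits from a byte buffer at a running bit offset. It is a generic illustration only, not the layout described above.

        #include <stddef.h>
        #include <stdint.h>

        // Read `count` bits (1..32) from `buf` starting at bit position *pos,
        // most-significant bit first, and advance *pos past them.
        uint32_t read_bits(const uint8_t *buf, size_t *pos, unsigned count)
        {
            uint32_t value = 0;
            for (unsigned i = 0; i < count; ++i) {
                size_t   byte = *pos / 8;
                unsigned bit  = 7 - (*pos % 8);   // MSB-first within each byte
                value = (value << 1) | ((buf[byte] >> bit) & 1u);
                ++(*pos);
            }
            return value;
        }

        // Example use against a hypothetical descriptor buffer `meta`:
        //   size_t pos = 0;
        //   bool is_primitive = read_bits(meta, &pos, 1);
        //   bool is_array     = read_bits(meta, &pos, 1);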

    Read the article

  • Glassfish V3 won't start

    - by Zakaria
    Hi everybody, I installed NetBeans 6.8 and tried to run the GlasshFish V3 server. I'm working under Windows Vista 32 Bits. First, it won't run. Then I modified the c:\Windows\System32\drivers\etc\hosts file and put the following line into it: 127.0.0.1 localhost And when I run the GlasshFish V3 Server, no error is showing but only "INFOs" are displayed: 3 avr. 2010 19:23:19 com.sun.enterprise.glassfish.bootstrap.ASMain main INFO: Launching GlassFish on Felix platform Welcome to Felix ================ INFO: Perform lazy SSL initialization for the listener 'http-listener-2' INFO: Starting Grizzly Framework 1.9.18-k - Sat Apr 03 19:23:24 CEST 2010 INFO: Starting Grizzly Framework 1.9.18-k - Sat Apr 03 19:23:25 CEST 2010 INFO: Grizzly Framework 1.9.18-k started in: 423ms listening on port 35127 INFO: GlassFish v3 (74.2) startup time : Felix(4456ms) startup services(1709ms) total(6165ms) INFO: Grizzly Framework 1.9.18-k started in: 459ms listening on port 35116 INFO: Grizzly Framework 1.9.18-k started in: 428ms listening on port 35155 INFO: Grizzly Framework 1.9.18-k started in: 470ms listening on port 35160 INFO: Grizzly Framework 1.9.18-k started in: 513ms listening on port 35159 INFO: javassist.util.proxy.ProxyFactory.classLoaderProvider = org.glassfish.weld.WeldActivator$GlassFishClassLoaderProvider@5be8f4 INFO: Hibernate Validator bean-validator-3.0-JBoss-4.0.2 INFO: Binding RMI port to *:35165 INFO: Instantiated an instance of org.hibernate.validator.engine.resolver.JPATraversableResolver. INFO: JMXStartupService: Started JMXConnector, JMXService URL = service:jmx:rmi://PC-de-Charlotte:35165/jndi/rmi://PC-de-Charlotte:35165/jmxrmi INFO: Using com.sun.enterprise.transaction.jts.JavaEETransactionManagerJTSDelegate as the delegate INFO: [Thread[GlassFish Kernel Main Thread,5,main]] started INFO: Grizzly Framework 1.9.18-k started in: 150ms listening on port 35159 INFO: Perform lazy SSL initialization for the listener 'http-listener-2' INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Program Files\sges-v3\glassfish\modules\autostart, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-330907148519261411, felix.fileinstall.filter = null} INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Users\Charlotte\.netbeans\6.8\GlassFish_v3\autodeploy\bundles, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-2938963288421854459, felix.fileinstall.filter = null} INFO: Grizzly Framework 1.9.18-k started in: 95ms listening on port 35160 INFO: Updating configuration from org.apache.felix.fileinstall-autodeploy-bundles.cfg INFO: Installed C:\Program Files\sges-v3\glassfish\modules\autostart\org.apache.felix.fileinstall-autodeploy-bundles.cfg INFO: {felix.fileinstall.poll (ms) = 5000, felix.fileinstall.dir = C:\Users\Charlotte\.netbeans\6.8\GlassFish_v3\autodeploy\bundles, felix.fileinstall.debug = 1, felix.fileinstall.bundles.new.start = true, felix.fileinstall.tmpdir = C:\Users\CHARLO~1\AppData\Local\Temp\fileinstall-6474085409014899009, felix.fileinstall.filter = null} And there is no message such as "Glassfish started"! So, when I try to access to the admin web interface: localhost:4848 or localhost:8080 or localhost:8181 , It doesn't work. What should I do? Thank you very much, Regards.

    Read the article

  • Problem inserting JavaScript into a CKEditor Instance running inside a jQuery-UI Dialog Box

    - by PlasmaFlux
    Hello Overflowers! Here's what's going on: I'm building an app (using PHP, jQuery, etc), part of which lets users edit various bits of a web page (Header, Footer, Main Content, Sidebar) using CKEditor. It's set up so that each editable bit has an "Edit Content" button in the upper right that, on click, launches an instance of CKEditor inside a jQuery-UI Dialog box. After the user is done editing, they can click an 'Update Changes' button that passes the edited content back off to the main page and closes the Dialog/CKeditor. It's all working magnificently, save for one thing. If someone inserts any JavaScript code, wrapped in 'script' tags, using either the Insert HTML Plugin for CKEditor or by going to 'Source' in CKEditor and placing the code in the source, everything seems okay until they click the 'Update Changes' button. The JavaScript appears to be inserting properly, but when 'Update Changes' is clicked, instead of the Dialog closing and passing the script back into the div where it belongs, what I get instead is an all-white screen with just the output of the JavaScript. For example, a simple 'Hello World' script results in a white screen with the string 'Hello World' in the upper left corner; for more intricate scripts, like an API call to, say Aweber, that generates a newsletter signup form, I get an all-white screen with the resulting form from the Aweber API call perfectly rendered in the middle of the screen. One of the most confusing parts is that, on these pages, if I click 'View Source', I get absolutely nothing. Blank emptiness. Here's all my code that handles the launching of the CKEditor instance inside the jQuery-UI Dialog, and the passing of the edited data back into its associated div on click of the 'Update Changes' button: $(function() { $('.foobar_edit_button') .button() .click(function() { var beingEdited = $(this).nextAll('.foobar_editable').attr('id'); var content = $(this).nextAll('.foobar_editable').html(); $('#foobar_editor').html(content); $('#foobar_editor').dialog( { open:function() { $(this).ckeditor(function() { CKFinder.SetupCKEditor( this, '<?php echo '/foobar/lib/editor/ckfinder/'; ?>' ); }); }, close:function() { $(this).ckeditorGet().destroy(); }, autoOpen: false, width: 840, modal: true, buttons: { 'Update Changes': function() { // TODO: submit changes to server via ajax once its completed: for ( instance in CKEDITOR.instances ) CKEDITOR.instances[instance].updateElement(); var edited_content = $('#foobar_editor').html(); $('#' + beingEdited).html(edited_content); $(this).dialog('close'); } } }); $('#foobar_editor').dialog('open'); }); }); I'm all sorts of confused. If anyone can point me in the right direction, it will be greatly appreciated. Thanks!

    Read the article

  • JavaScript problems when generating HTML reports from within Java

    - by posdef
    Hi, I have been working on a Java project in which the reports will be generated in HTML, so I am implementing methods for creating these reports. One important functionality is to be able to have as much info as possible in the tables, but still not clutter too much. In other words the details should be available if the user wishes to take a look at them but not necessarily visible by default. I have done some searching and testing and found an interesting template for hiding/showing content with the use of CSS and javascript, the problem is that when I try the resultant html page the scripts dont work. I am not sure if it's due a problem in Java or in the javascript itself. I have compared the html code that java produces to the source where I found the template, they seem to match pretty well. Below are bits of my java code that generates the javascript and the content, i would greatly appreciate if anyone can point out the possible reasons for this problem: //goes to head private void addShowHideScript() throws IOException{ StringBuilder sb = new StringBuilder(); sb.append("<script type=\"text/javascript\" language=\"JavaScript\">\n"); sb.append("<!--function HideContent(d) {\n"); sb.append("document.getElementById(d).style.display=\"none\";}\n"); sb.append("function ShowContent(d) {\n"); sb.append("document.getElementById(d).style.display=\"block\";}\n"); sb.append("function ReverseDisplay(d) {\n"); sb.append("if(document.getElementById(d).style.display==\"none\")\n"); sb.append("{ document.getElementById(d).style.display=\"block\"; }\n"); sb.append("else { document.getElementById(d).style.display=\"none\"; }\n}\n"); sb.append("//--></script>\n"); out.write(sb.toString()); out.newLine(); } // body private String linkShowHideContent(String pathname, String divname){ StringBuilder sb = new StringBuilder(); sb.append("<a href=\"javascript:ReverseDisplay('"); sb.append(divname); sb.append("')\">"); sb.append(pathname); sb.append("</a>"); return sb.toString(); } // content out.write(linkShowHideContent("hidden content", "ex")); out.write("<div id=\"ex\" style=\"display:block;\">"); out.write("<p>Content goes here.</p></div>");

    Read the article

  • error C2065: 'CComQIPtr' : undeclared identifier

    - by Ken Smith
    I'm still feeling my way around C++, and am a complete ATL newbie, so I apologize if this is a basic question. I'm starting with an existing VC++ executable project that has functionality I'd like to expose as an ActiveX object (while sharing as much of the source as possible between the two projects). I've approached this by adding an ATL project to the solution in question, and in that project have referenced all the .h and .cpp files from the executable project, added all the appropriate references, and defined all the preprocessor macros. So far so good. But I'm getting a compiler error in one file (HideDesktop.cpp). The relevant parts look like this: #include "stdafx.h" #define WIN32_LEAN_AND_MEAN #include <Windows.h> #include <WinInet.h> // Shell object uses INTERNET_MAX_URL_LENGTH (go figure) #if _MSC_VER < 1400 #define _WIN32_IE 0x0400 #endif #include <atlbase.h> // ATL smart pointers #include <shlguid.h> // shell GUIDs #include <shlobj.h> // IActiveDesktop #include "stdhdrs.h" struct __declspec(uuid("F490EB00-1240-11D1-9888-006097DEACF9")) IActiveDesktop; #define PACKVERSION(major,minor) MAKELONG(minor,major) static HRESULT EnableActiveDesktop(bool enable) { CoInitialize(NULL); HRESULT hr; CComQIPtr<IActiveDesktop, &IID_IActiveDesktop> pIActiveDesktop; // <- Problematic line (throws errors 2065 and 2275) hr = pIActiveDesktop.CoCreateInstance(CLSID_ActiveDesktop, NULL, CLSCTX_INPROC_SERVER); if (!SUCCEEDED(hr)) { return hr; } COMPONENTSOPT opt; opt.dwSize = sizeof(opt); opt.fActiveDesktop = opt.fEnableComponents = enable; hr = pIActiveDesktop->SetDesktopItemOptions(&opt, 0); if (!SUCCEEDED(hr)) { CoUninitialize(); // pIActiveDesktop->Release(); return hr; } hr = pIActiveDesktop->ApplyChanges(AD_APPLY_REFRESH); CoUninitialize(); // pIActiveDesktop->Release(); return hr; } This code is throwing the following compiler errors: error C2065: 'CComQIPtr' : undeclared identifier error C2275: 'IActiveDesktop' : illegal use of this type as an expression error C2065: 'pIActiveDesktop' : undeclared identifier The two weird bits: (1) CComQIPtr is defined in atlcomcli.h, which is included in atlbase.h, which is included in HideDesktop.cpp; and (2) this file is only throwing these errors when it's referenced in my new ATL/AX project: it's not throwing them in the original executable project, even though they have basically the same preprocessor definitions. (The ATL AX project, naturally enough, defines _ATL_DLL, but I can't see where that would make a difference.) My current workaround is to use a normal "dumb" pointer, like so: IActiveDesktop *pIActiveDesktop; HRESULT hr = ::CoCreateInstance(CLSID_ActiveDesktop, NULL, // no outer unknown CLSCTX_INPROC_SERVER, IID_IActiveDesktop, (void**)&pIActiveDesktop); And that works, provided I remember to release it. But I'd rather be using the ATL smart stuff. Any thoughts?
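    As a point of comparison, the ATL smart-pointer version of the workaround would normally look like the sketch below (CComPtr comes from the same atlcomcli.h that atlbase.h pulls in, and COM is assumed to be initialized already). It is offered as a hedged illustration of the intended usage, not as a fix for the C2065 itself:

        #include <atlbase.h>   // CComPtr / CComQIPtr
        #include <shlguid.h>   // shell GUIDs
        #include <shlobj.h>    // IActiveDesktop

        HRESULT ToggleActiveDesktop(bool enable)
        {
            CComPtr<IActiveDesktop> pIActiveDesktop;
            HRESULT hr = pIActiveDesktop.CoCreateInstance(CLSID_ActiveDesktop,
                                                          NULL, CLSCTX_INPROC_SERVER);
            if (FAILED(hr))
                return hr;

            COMPONENTSOPT opt = { sizeof(opt) };
            opt.fActiveDesktop = opt.fEnableComponents = enable;
            hr = pIActiveDesktop->SetDesktopItemOptions(&opt, 0);
            if (SUCCEEDED(hr))
                hr = pIActiveDesktop->ApplyChanges(AD_APPLY_REFRESH);
            return hr;   // CComPtr releases the interface automatically
        }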

    Read the article

  • "is not abstract and does not override abstract method."

    - by Chris Bolton
    So I'm pretty new to Android development and have been trying to piece together some code bits. Here's what I have so far:

    package com.teslaprime.prirt;

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.View;
    import android.widget.ArrayAdapter;
    import android.widget.Button;
    import android.widget.EditText;
    import android.widget.ListView;
    import android.widget.AdapterView.OnItemClickListener;
    import java.util.ArrayList;
    import java.util.List;

    public class TaskList extends Activity {
        List<Task> model = new ArrayList<Task>();
        ArrayAdapter<Task> adapter = null;

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            Button add = (Button) findViewById(R.id.add);
            add.setOnClickListener(onAdd);
            ListView list = (ListView) findViewById(R.id.tasks);
            adapter = new ArrayAdapter<Task>(this, android.R.layout.simple_list_item_1, model);
            list.setAdapter(adapter);
            list.setOnItemClickListener(new OnItemClickListener() {
                public void onItemClick(View v, int position, long id) {
                    adapter.remove(position);
                }
            });
        }

        private View.OnClickListener onAdd = new View.OnClickListener() {
            public void onClick(View v) {
                Task task = new Task();
                EditText name = (EditText) findViewById(R.id.taskEntry);
                task.name = name.getText().toString();
                adapter.add(task);
            }
        };
    }

    And here are the errors I'm getting:

    compile:
    [javac] /opt/android-sdk/tools/ant/main_rules.xml:384: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds
    [javac] Compiling 2 source files to /home/chris-kun/code/priRT/bin/classes
    [javac] /home/chris-kun/code/priRT/src/com/teslaprime/prirt/TaskList.java:30: <anonymous com.teslaprime.prirt.TaskList$1> is not abstract and does not override abstract method onItemClick(android.widget.AdapterView<?>,android.view.View,int,long) in android.widget.AdapterView.OnItemClickListener
    [javac] list.setOnItemClickListener(new OnItemClickListener() {
    [javac] ^
    [javac] /home/chris-kun/code/priRT/src/com/teslaprime/prirt/TaskList.java:32: remove(com.teslaprime.prirt.Task) in android.widget.ArrayAdapter<com.teslaprime.prirt.Task> cannot be applied to (int)
    [javac] adapter.remove(position);
    [javac] ^
    [javac] 2 errors

    BUILD FAILED
    /opt/android-sdk/tools/ant/main_rules.xml:384: Compile failed; see the compiler error output for details.

    Total time: 2 seconds

    Any ideas?
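
    Editor's sketch of the two fixes the compiler output points at (class and field names are taken from the question): AdapterView.OnItemClickListener declares onItemClick(AdapterView<?>, View, int, long), so the anonymous class must use that exact signature, and ArrayAdapter.remove() takes the item itself rather than its index:

    // Requires: import android.widget.AdapterView;
    list.setOnItemClickListener(new OnItemClickListener() {
        @Override
        public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
            Task clicked = adapter.getItem(position);  // look up the Task at that row
            adapter.remove(clicked);                   // remove() wants the object, not an int
        }
    });

    With both changes the anonymous class overrides the abstract method, so the "is not abstract" error and the remove(int) mismatch should both go away.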

    Read the article

  • Writing Device Drivers for Microcontrollers, where to define IO Port pins?

    - by volting
    I always seem to encounter this dilemma when writing low-level code for MCUs: I never know where to declare pin definitions so as to make the code as reusable as possible. In this case I'm writing a driver to interface an 8051 to an MCP4922 12-bit serial DAC, and I'm unsure how/where I should declare the pin definitions for CS (chip select) and LDAC (data latch) for the DAC. At the moment they're declared in the header file for the driver. I've done a lot of research trying to figure out the best approach but haven't really found anything. I basically want to know what the best practices are... if there are books worth reading or online information, examples, etc., any recommendations would be welcome. Here is a snippet of the driver so you get the idea:

    /**
     @brief This function is used to write a 16-bit data word to DAC B - 12 data bits plus 4 configuration bits
     @param dac_data A 12-bit word
     @param ip_buf_unbuf_select Input buffered/unbuffered select bit. Buffered = 1; Unbuffered = 0
     @param gain_select Output gain selection bit. 1 = 1x (VOUT = VREF * D/4096). 0 = 2x (VOUT = 2 * VREF * D/4096)
    */
    void MCP4922_DAC_B_TX_word(unsigned short int dac_data, bit ip_buf_unbuf_select, bit gain_select)
    {
        unsigned char low_byte = 0, high_byte = 0;

        CS = 0;                                      /** Select the chip */

        high_byte |= ((0x01 << 7) | (0x01 << 4));    /** Set bit to select DAC A and set SHDN bit high for DAC A active operation */
        if (ip_buf_unbuf_select)
            high_byte |= (0x01 << 6);
        if (gain_select)
            high_byte |= (0x01 << 5);
        high_byte |= ((dac_data >> 8) & 0x0F);
        low_byte |= dac_data;

        SPI_master_byte(high_byte);
        SPI_master_byte(low_byte);

        CS = 1;

        LDAC = 0;                                    /** Latch the data */
        LDAC = 1;
    }
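
    Editor's sketch of one widely used arrangement, not the only answer: keep the driver source free of concrete port pins and collect all board wiring in a small project-specific header that the driver includes. The file and macro names below (board_mcp4922.h, DAC_CS, DAC_LDAC) are made up for illustration, and the pin names assume SDCC-style register definitions (a Keil build would use sbit declarations instead):

    /* board_mcp4922.h -- hypothetical per-board wiring header.
       Porting the driver to a new board means editing only this file. */
    #ifndef BOARD_MCP4922_H
    #define BOARD_MCP4922_H

    #include <8051.h>        /* SDCC register/pin definitions (P1_0, P1_1, ...) */

    #define DAC_CS    P1_0   /* chip select line to the MCP4922 */
    #define DAC_LDAC  P1_1   /* LDAC latch line to the MCP4922  */

    #endif

    /* mcp4922.c -- the driver then only ever touches the symbolic names */
    #include "board_mcp4922.h"

    static void mcp4922_select(void)   { DAC_CS = 0; }
    static void mcp4922_deselect(void) { DAC_CS = 1; }
    static void mcp4922_latch(void)    { DAC_LDAC = 0; DAC_LDAC = 1; }

    The driver's own header then declares only the public functions; nothing in it names a port pin, so the same driver source can sit unchanged in several projects whose wiring differs.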

    Read the article

  • How to configure the framesize using AudioUnit.framework on iOS

    - by Piperoman
    I have an audio app in which I need to capture mic samples and encode them to MP3 with FFmpeg. First I configure the audio:

    /**
     * We need to specify the format we want to work with.
     * We use Linear PCM because it is uncompressed and we work on raw data.
     *
     * We want 16 bits, 2 bytes (short) per packet/frame at 8 kHz.
     */
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate       = SAMPLE_RATE;
    audioFormat.mFormatID         = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags      = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
    audioFormat.mFramesPerPacket  = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel   = audioFormat.mChannelsPerFrame * sizeof(SInt16) * 8;
    audioFormat.mBytesPerPacket   = audioFormat.mChannelsPerFrame * sizeof(SInt16);
    audioFormat.mBytesPerFrame    = audioFormat.mChannelsPerFrame * sizeof(SInt16);

    The recording callback is:

    static OSStatus recordingCallback(void *inRefCon,
                                      AudioUnitRenderActionFlags *ioActionFlags,
                                      const AudioTimeStamp *inTimeStamp,
                                      UInt32 inBusNumber,
                                      UInt32 inNumberFrames,
                                      AudioBufferList *ioData)
    {
        NSLog(@"Log record: %lu", inBusNumber);
        NSLog(@"Log record: %lu", inNumberFrames);
        NSLog(@"Log record: %lu", (UInt32)inTimeStamp);

        // the data gets rendered here
        AudioBuffer buffer;

        // a variable where we check the status
        OSStatus status;

        /** This is the reference to the object that owns the callback. */
        AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon;

        /** At this point we define the number of channels, which is mono for the iPhone.
            The number of frames is usually 512 or 1024. */
        buffer.mDataByteSize = inNumberFrames * sizeof(SInt16);   // sample size
        buffer.mNumberChannels = 1;                               // one channel
        buffer.mData = malloc(inNumberFrames * sizeof(SInt16));   // buffer size

        // we put our buffer into a bufferlist array for rendering
        AudioBufferList bufferList;
        bufferList.mNumberBuffers = 1;
        bufferList.mBuffers[0] = buffer;

        // render input and check for error
        status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
        [audioProcessor hasError:status:__FILE__:__LINE__];

        // process the bufferlist in the audio processor
        [audioProcessor processBuffer:&bufferList];

        // clean up the buffer
        free(bufferList.mBuffers[0].mData);

        //NSLog(@"RECORD");
        return noErr;
    }

    With this I see:

    inBusNumber = 1
    inNumberFrames = 1024
    inTimeStamp = 80444304 // always the same inTimeStamp, which is strange

    However, the frame size I need for MP3 encoding is 1152 samples. How can I configure that? If I buffer the data, that implies a delay, which I would like to avoid because this is a real-time app. If I use this configuration as is, each encoder buffer ends up with trailing garbage: 1152 - 1024 = 128 bad samples. All samples are SInt16.
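
    Editor's sketch, offered as one approach rather than a confirmed answer: the hardware I/O buffer size can only be nudged (for example via the audio session's preferred I/O buffer duration) and is not guaranteed to land on exactly 1152 frames, so the usual pattern is a small FIFO between the render callback and the encoder. The added latency is at most one MP3 frame (under 1152 samples, roughly 144 ms at 8 kHz), and the trailing-garbage problem goes away because the encoder only ever sees complete frames. The names below (encode_mp3_frame, the static fifo) are illustrative, not part of any real API:

    #include <string.h>   /* memcpy, memmove */
    #include <stddef.h>   /* size_t */

    typedef short SInt16;                       /* stand-in for the CoreAudio typedef */
    #define ENCODER_FRAME 1152                  /* samples per MP3 frame */

    extern void encode_mp3_frame(const SInt16 *frame, size_t nsamples);  /* hypothetical encoder hook */

    static SInt16 fifo[ENCODER_FRAME * 4];      /* room for a few callbacks' worth of audio */
    static size_t fifoCount = 0;                /* samples currently waiting in the fifo */

    /* Call this from the render callback with whatever inNumberFrames delivered. */
    static void accumulate_and_encode(const SInt16 *samples, size_t count)
    {
        memcpy(fifo + fifoCount, samples, count * sizeof(SInt16));
        fifoCount += count;

        while (fifoCount >= ENCODER_FRAME) {                    /* a full MP3 frame is ready */
            encode_mp3_frame(fifo, ENCODER_FRAME);
            fifoCount -= ENCODER_FRAME;
            memmove(fifo, fifo + ENCODER_FRAME, fifoCount * sizeof(SInt16));
        }
    }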

    Read the article

  • Question: SpeechSynthesizer.SetOutputToAudioStream audio format problem

    - by Chris Kugler
    Hi, I'm currently working on an application which requires transmission of speech encoded in a specific audio format.

    System.Speech.AudioFormat.SpeechAudioFormatInfo synthFormat =
        new System.Speech.AudioFormat.SpeechAudioFormatInfo(System.Speech.AudioFormat.EncodingFormat.Pcm,
                                                            8000, 16, 1, 16000, 2, null);

    This states that the audio is in PCM format: 8000 samples per second, 16 bits per sample, mono, 16000 average bytes per second, and a block alignment of 2. When I attempt to execute the following code, nothing is written to my MemoryStream instance; however, when I change from 8000 samples per second up to 11025, the audio data is written successfully.

    SpeechSynthesizer synthesizer = new SpeechSynthesizer();
    waveStream = new MemoryStream();

    PromptBuilder pbuilder = new PromptBuilder();
    PromptStyle pStyle = new PromptStyle();
    pStyle.Emphasis = PromptEmphasis.None;
    pStyle.Rate = PromptRate.Fast;
    pStyle.Volume = PromptVolume.ExtraLoud;

    pbuilder.StartStyle(pStyle);
    pbuilder.StartParagraph();
    pbuilder.StartVoice(VoiceGender.Male, VoiceAge.Teen, 2);
    pbuilder.StartSentence();
    pbuilder.AppendText("This is some text.");
    pbuilder.EndSentence();
    pbuilder.EndVoice();
    pbuilder.EndParagraph();
    pbuilder.EndStyle();

    synthesizer.SetOutputToAudioStream(waveStream, synthFormat);
    synthesizer.Speak(pbuilder);
    synthesizer.SetOutputToNull();

    There are no exceptions or errors recorded when using a sample rate of 8000, and I couldn't find anything useful in the documentation on SetOutputToAudioStream about why it succeeds at 11025 samples per second but not at 8000. I have a workaround involving a wav file that I generated and converted to the correct sample rate using some sound-editing tools, but I would like to generate the audio from within the application if I can. One particular point of interest is that the SpeechRecognitionEngine accepts that audio format and successfully recognized the speech in my synthesized wave file... Update: I recently discovered that this audio format succeeds for certain installed voices but fails for others. It fails specifically for LH Michael and LH Michelle, and failure varies with certain voice settings defined in the PromptBuilder.
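
    Editor's sketch of one workaround, not a confirmed fix: since the failure appears to depend on the installed voice rather than on the calling code, a defensive route is to let the synthesizer render at a rate the voice accepts (11025 Hz worked in the question) and resample the raw PCM to 8 kHz afterwards. The resampler below is NAudio's WaveFormatConversionStream, which is an assumption about available tooling; any PCM resampler would serve:

    // Assumes a reference to NAudio in addition to System.Speech.
    using System.IO;
    using System.Speech.AudioFormat;
    using System.Speech.Synthesis;
    using NAudio.Wave;

    static byte[] SynthesizeAt8kHz(string text)
    {
        var intermediate = new MemoryStream();
        using (var synthesizer = new SpeechSynthesizer())
        {
            // 11025 Hz, 16-bit, mono: average bytes/sec = 22050, block align = 2
            var format11k = new SpeechAudioFormatInfo(EncodingFormat.Pcm, 11025, 16, 1, 22050, 2, null);
            synthesizer.SetOutputToAudioStream(intermediate, format11k);
            synthesizer.Speak(text);
            synthesizer.SetOutputToNull();
        }

        intermediate.Position = 0;
        var source = new RawSourceWaveStream(intermediate, new WaveFormat(11025, 16, 1));
        using (var resampled = new WaveFormatConversionStream(new WaveFormat(8000, 16, 1), source))
        using (var output = new MemoryStream())
        {
            resampled.CopyTo(output);   // raw 8 kHz, 16-bit, mono PCM
            return output.ToArray();
        }
    }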

    Read the article
