Partnering with The Codero for complete digital payment gateway solutions

We have some great news to share with all of you. As you know, we are continuously working to improve the solutions and services we promised our customers. Our entire journey depends on how well we serve our customers and help them grow their businesses rapidly through consistent improvement of our services. Today I am excited to announce some very good news.



Preview Technologies Limited has signed an MoU with The Codero Limited. The Codero Limited offers a set of digital payment gateway solutions, and one of their core products, “EasyPayWay”, is now available to all of our new and existing customers who have been waiting to integrate a digital payment gateway with their e-commerce website and accept payments through almost every kind of payment system, including bKash, Rocket, credit/debit cards, and online banking. This partnership is a great opportunity for Preview Technologies Limited to develop easier, more efficient, and more customized payment gateway solutions. Here is a snapshot of the solution that I can’t wait to tell you about.

Faster Merchant ID Create & Approval Process
All of our new and existing customers can now apply for a merchant account to accept payments from their clients via debit/credit card, online banking, and other popular payment providers.
Getting merchant approval usually takes a long time and involves lots of paperwork. But from now on, Preview Technologies Limited, together with our partner The Codero Limited, can process all merchant applications more quickly and efficiently.

Lowest Transaction Fees
We are still finalizing the details and will announce the transaction fees and registration pricing later, but thanks to this partnership agreement we can now guarantee the lowest transaction and processing fees for our customers.

All Popular Bangladeshi Payment Methods Supported
EasyPayWay, a solution provided by The Codero Limited, supports a wide range of popular payment methods, including bKash, Rocket, QCash, MCash, DBBL Nexus, Visa, MasterCard, JCB, City Bank, Payza, IFIC Mobile Banking, Discover, FastCash, etc.


Dedicated Dashboard With Extended Reporting
EasyPayWay will provide a dedicated dashboard to all of our customers, where you can explore all payment and transaction related activity, including managing transactions, processing refunds, requesting withdrawals to your bank, etc.

Extensive API Integration & Customization
EasyPayWay provides a wide range of integration and customization features such as plugins, libraries, and SDKs, so our customers can integrate it smoothly with the systems we develop. And Preview Technologies Limited will help all of its customers integrate the payment gateway with their systems, so you shouldn’t worry about it.

Ultimately, I couldn’t wait to share this extremely exciting news with you. On behalf of Preview Technologies Limited, I can promise a flexible, efficient, and reliable payment gateway solution for all of our new and existing customers. Besides sharing this news with all of you, I want to personally thank Mr. Riad and Mr. Rana from The Codero Limited team for making this happen, and on behalf of our entire Preview Technologies family, we hope for a greater future for both of our businesses in every way we can work together.


Get OAuth 2.0 access token using Retrofit 2.x

Retrofit is an awesome and easy-to-use library. You can use Retrofit in any Android or Java application that needs to interact with our OAuth 2.0 server to get an access token.

You just need to build the Retrofit client properly to get it done. It doesn’t require any advanced knowledge. Let’s see how it can be done easily.

To get the access token using Retrofit, we need to do the following –

Add Retrofit to your project via Maven or Gradle. The complete installation guide can be found here.
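As a minimal sketch for Gradle users (the version numbers are placeholders — check the official installation guide for the current release), the dependencies look roughly like this:

```groovy
dependencies {
    // Retrofit core, plus the Gson converter used later to map TokenResponse
    implementation 'com.squareup.retrofit2:retrofit:2.9.0'
    implementation 'com.squareup.retrofit2:converter-gson:2.9.0'
}
```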

Now let’s create the Retrofit builder. But before doing that, we need to create a service interface.

import retrofit2.Call;
import retrofit2.http.Field;
import retrofit2.http.FormUrlEncoded;
import retrofit2.http.POST;

public interface AccessTokenServiceInterface {

    @FormUrlEncoded
    @POST("oauth/token") // adjust this path to your OAuth 2.0 server's token endpoint
    Call<TokenResponse> getToken(
            @Field("client_id") String clientId,
            @Field("client_secret") String clientSecret,
            @Field("scope") String scope,
            @Field("grant_type") String grantType);
}
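It can help to see what this interface actually sends over the wire: @FormUrlEncoded makes Retrofit submit the fields as an application/x-www-form-urlencoded body. Here is a stdlib-only sketch of that encoding (the class name FormBodySketch is illustrative, not part of the tutorial's repository — Retrofit does this for you):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class FormBodySketch {
    // Roughly the form-encoded body Retrofit builds from the @Field parameters above
    public static String buildFormBody(String clientId, String clientSecret,
                                       String scope, String grantType) {
        try {
            return "client_id=" + URLEncoder.encode(clientId, "UTF-8")
                 + "&client_secret=" + URLEncoder.encode(clientSecret, "UTF-8")
                 + "&scope=" + URLEncoder.encode(scope, "UTF-8")
                 + "&grant_type=" + URLEncoder.encode(grantType, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e); // UTF-8 is always supported
        }
    }

    public static void main(String[] args) {
        // Spaces in the scope value are percent/plus encoded, e.g. "basic email" -> "basic+email"
        System.out.println(buildFormBody("OAUTH CLIENT ID", "OAUTH CLIENT SECRET",
                "basic email", "client_credentials"));
    }
}
```

This is only a sanity check for debugging; in the actual tutorial Retrofit performs the encoding internally.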

And now the builder.

import java.io.IOException;

import retrofit2.Call;
import retrofit2.Response;
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;

public class Http {
    public static void main(String[] args) {
        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl("https://your-oauth-server.example.com/") // base URL of your OAuth 2.0 server
                .addConverterFactory(GsonConverterFactory.create())
                .build();

        AccessTokenServiceInterface service = retrofit.create(AccessTokenServiceInterface.class);

        // grant type = client_credentials
        Call<TokenResponse> call = service.getToken("OAUTH CLIENT ID", "OAUTH CLIENT SECRET", "basic email", "client_credentials");
        try {
            Response<TokenResponse> response = call.execute();
            System.out.println(response.body().getAccessToken());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

We also need to map the JSON response from our OAuth 2.0 server, so we need a model class for that. Let’s create the model class.


import com.google.gson.annotations.SerializedName;

public class TokenResponse {

    // OAuth 2.0 servers return snake_case field names (token_type, expires_in, access_token),
    // so we map them to camelCase properties with @SerializedName
    @SerializedName("token_type")
    private String tokenType;

    @SerializedName("expires_in")
    private Integer expiresIn;

    @SerializedName("access_token")
    private String accessToken;

    public String getTokenType() {
        return tokenType;
    }

    public void setTokenType(String tokenType) {
        this.tokenType = tokenType;
    }

    public Integer getExpiresIn() {
        return expiresIn;
    }

    public void setExpiresIn(Integer expiresIn) {
        this.expiresIn = expiresIn;
    }

    public String getAccessToken() {
        return accessToken;
    }

    public void setAccessToken(String accessToken) {
        this.accessToken = accessToken;
    }
}

That’s it. Now run Http.main and you will get the access token. You can also download these scripts from our PreviewTechnologies/access-token-retrofit GitHub repository.

This article is also available on our support portal.

cURL vs “file_get_contents()” in Google App Engine – Performance analysis

To make any kind of external HTTP request from a PHP application deployed in Google App Engine (a.k.a. GAE), we basically have three options: the native PHP cURL extension, the “cURL Lite” version provided by Google, and the native http:// and https:// PHP stream handlers.

The cURL extension requires a valid billing profile, and you can only enable it in a paid Google Cloud project. Google’s custom cURL Lite, on the other hand, uses Google’s urlfetch service, so you can use it in the free tier of your application.

Recently, our engineering team was wondering which of cURL, cURL Lite, or the native PHP HTTP stream handlers is a little bit faster; by “a little bit faster” I mean we count even 50 ms of latency. So I ran some tests with a single script hosted on Google App Engine (PHP standard runtime environment). We have lots of PHP microservice apps hosted on Google App Engine, and at certain times all the services need to talk to each other via external HTTP requests. We were aware that latency was sometimes killing this communication.

So we built two PHP files. One uses cURL to post some foobar JSON data to an external URL, and the other uses the native http:// and https:// PHP stream handlers. Let’s see what our experimental scripts looked like.


/**
 * curl.php
 * Using the cURL extension
 */

$data = array("foo" => "bar");
$data_string = json_encode($data);

$ch = curl_init(''); // target URL elided in the original post
//$ch = curl_init('/post.php');
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "POST");
curl_setopt($ch, CURLOPT_POSTFIELDS, $data_string);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
        'Content-Type: application/json',
        'Content-Length: ' . strlen($data_string)
));

$result = curl_exec($ch);
curl_close($ch);

/**
 * php_http_stream.php
 * Using the native http:// and https:// stream handlers
 * (like cURL Lite, on App Engine this goes through Google's urlfetch service)
 */

$url = ''; // target URL elided in the original post
$data = array("foo" => "bar");
$data_string = json_encode($data);

$contentLength = strlen($data_string);

$headers = "accept: */*\r\n" .
    "Content-Type: application/json\r\n" .
    "Content-Length: $contentLength\r\n";

$context = [
    'http' => [
        'method' => 'POST',
        'header' => $headers,
        'content' => $data_string,
    ],
];
$context = stream_context_create($context);
$result = file_get_contents($url, false, $context);

And here is the trace report of those two calls.

@0 ms
Name			RPCs	Total Duration (ms)
/curl_lite.php		1	450
/urlfetch.Fetch		1	333
Timestamp		xxxx-xx-xx
Traced time 		333 ms
Untraced time 		117 ms

http/response/size	25
@0 ms
Name				RPCs	Total Duration (ms)
/curl.php			1	753
/remote_socket.Close		4	4
/remote_socket.Connect		10	157
/remote_socket.CreateSocket	4	10
/remote_socket.GetSocketOptions	1	1
/remote_socket.Poll		10	469
/remote_socket.Receive		2	2
/remote_socket.Send		2	2
/remote_socket.SetSocketOptions	1	1
Timestamp	2017-xx-xx
Traced time 	646 ms
Untraced time 	107 ms

http/response/size		25

So what does it mean to you? It means a lot to me. Obviously, cURL Lite is saving me a few hundred milliseconds (450 ms total versus 753 ms for cURL). And I also don’t need to worry about the “socket operation” quota that cURL consumes.

So what should I say, that file_get_contents() is more optimized? Of course, I am only talking about how it performs for small external URL calls through Google’s urlfetch service.

So if your application needs to interact with external services with little configuration and few options, I would prefer the native PHP HTTP stream handlers and making all external HTTP calls with the file_get_contents() function. On App Engine, file_get_contents() uses the urlfetch service, and you don’t need to enable the cURL extension in your application.

Error: You Have Not Concluded Your Merge (merge_head exists)


Git is a very useful version control system. While working with Git, you may face problems that cost you a few minutes to study and solve. Sometimes we get the error message “You have not concluded your merge (MERGE_HEAD exists)” (for example, when pulling in SourceTree) while running a Git command. It’s a weird issue and can sometimes be dangerous. Here I have shared some very short tips to overcome this problem.

Why does it happen?

It happens because your last pull failed to merge automatically and left your repository in a conflicted state. If you don’t fix the conflict before the next pull or push, you will see this error.

What’s the solution?

To solve the “Error: You Have Not Concluded Your Merge (merge_head exists)” problem, there is a simple solution. Follow these steps –

  1. Undo the merge and pull again

    Run the following command to undo the merge:
    git merge --abort (since Git version 1.7.4)
    git reset --merge (for earlier Git versions)

  2. Resolve the conflict

    To resolve conflict using command line, you can follow this tutorial.

  3. Add & commit the merge

    To stage the changes, run git add .
    and to commit, run git commit -m "commit message"

  4. Pull again

    Now git pull should work without error. To pull again, run git pull

Hopefully this will help you solve the “error: you have not concluded your merge (merge_head exists)” problem. Please leave a comment for others on whether this solution works! To see more Git-related tips & tricks, click here.

Analyze Monolog logging data with Elasticsearch


As a PHP programmer, I use Monolog a lot for application logging. I want to know what’s happening behind the scenes, which gives me hints to fix errors quickly or find bottlenecks inside my code. But some of my applications produce lots of log data that needs to be analyzed to get a better understanding. I am a fan of data visualization, so I love very big data. So when Monolog logs lots of data, I integrate the log data with Elasticsearch to analyze and search it, because Elasticsearch is a great tool for searching and indexing large amounts of data.

So here is a small snippet you can use to ship your Monolog logging data to your Elasticsearch platform.

Add the following dependencies to composer.json

    "require": {
        "ruflin/elastica": "*",
        "monolog/monolog": "*"
    }

Setup Monolog with ElasticSearch handler

require 'vendor/autoload.php';

use Monolog\Handler\ElasticSearchHandler;
use Monolog\Logger;
use Elastica\Client;

$client = new Client();
$options = array(
    'index' => 'elastic_search_logs',
    'type' => 'elastic_doc_type',
);
$handler = new ElasticSearchHandler($client, $options);

$log = new Logger('application_logs');
$log->pushHandler($handler); // without this, log records never reach Elasticsearch

$log->error('Write your error message');
$log->info('Write your info message');
$log->warning('Write your warning message');

That’s it. Now you should be able to store and index your application logs in your Elasticsearch platform. Leave a comment if it helped you.